00:00:00.001 Started by upstream project "autotest-per-patch" build number 132370
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.089 The recommended git tool is: git
00:00:00.090 using credential 00000000-0000-0000-0000-000000000002
00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.126 Fetching changes from the remote Git repository
00:00:00.128 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.176 Using shallow fetch with depth 1
00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.176 > git --version # timeout=10
00:00:00.219 > git --version # 'git version 2.39.2'
00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.791 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.804 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.817 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.817 > git config core.sparsecheckout # timeout=10
00:00:05.828 > git read-tree -mu HEAD # timeout=10
00:00:05.843 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.862 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.863 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.951 [Pipeline] Start of Pipeline
00:00:05.966 [Pipeline] library
00:00:05.968 Loading library shm_lib@master
00:00:05.968 Library shm_lib@master is cached. Copying from home.
00:00:05.984 [Pipeline] node
00:00:05.994 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.996 [Pipeline] {
00:00:06.005 [Pipeline] catchError
00:00:06.006 [Pipeline] {
00:00:06.018 [Pipeline] wrap
00:00:06.027 [Pipeline] {
00:00:06.037 [Pipeline] stage
00:00:06.040 [Pipeline] { (Prologue)
00:00:06.271 [Pipeline] sh
00:00:06.556 + logger -p user.info -t JENKINS-CI
00:00:06.576 [Pipeline] echo
00:00:06.578 Node: WFP8
00:00:06.586 [Pipeline] sh
00:00:06.887 [Pipeline] setCustomBuildProperty
00:00:06.900 [Pipeline] echo
00:00:06.901 Cleanup processes
00:00:06.905 [Pipeline] sh
00:00:07.188 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.188 3207242 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.202 [Pipeline] sh
00:00:07.483 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.483 ++ grep -v 'sudo pgrep'
00:00:07.483 ++ awk '{print $1}'
00:00:07.483 + sudo kill -9
00:00:07.483 + true
00:00:07.494 [Pipeline] cleanWs
00:00:07.503 [WS-CLEANUP] Deleting project workspace...
00:00:07.503 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.509 [WS-CLEANUP] done
00:00:07.513 [Pipeline] setCustomBuildProperty
00:00:07.526 [Pipeline] sh
00:00:07.803 + sudo git config --global --replace-all safe.directory '*'
00:00:07.888 [Pipeline] httpRequest
00:00:08.521 [Pipeline] echo
00:00:08.522 Sorcerer 10.211.164.20 is alive
00:00:08.529 [Pipeline] retry
00:00:08.531 [Pipeline] {
00:00:08.542 [Pipeline] httpRequest
00:00:08.546 HttpMethod: GET
00:00:08.546 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.547 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.562 Response Code: HTTP/1.1 200 OK
00:00:08.569 Success: Status code 200 is in the accepted range: 200,404
00:00:08.569 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.875 [Pipeline] }
00:00:15.896 [Pipeline] // retry
00:00:15.906 [Pipeline] sh
00:00:16.192 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.209 [Pipeline] httpRequest
00:00:16.614 [Pipeline] echo
00:00:16.616 Sorcerer 10.211.164.20 is alive
00:00:16.624 [Pipeline] retry
00:00:16.626 [Pipeline] {
00:00:16.639 [Pipeline] httpRequest
00:00:16.643 HttpMethod: GET
00:00:16.643 URL: http://10.211.164.20/packages/spdk_876509865ec375f83a6c8c00e2dfe215fc979f1f.tar.gz
00:00:16.644 Sending request to url: http://10.211.164.20/packages/spdk_876509865ec375f83a6c8c00e2dfe215fc979f1f.tar.gz
00:00:16.662 Response Code: HTTP/1.1 200 OK
00:00:16.662 Success: Status code 200 is in the accepted range: 200,404
00:00:16.663 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_876509865ec375f83a6c8c00e2dfe215fc979f1f.tar.gz
00:00:53.459 [Pipeline] }
00:00:53.477 [Pipeline] // retry
00:00:53.486 [Pipeline] sh
00:00:53.771 + tar --no-same-owner -xf spdk_876509865ec375f83a6c8c00e2dfe215fc979f1f.tar.gz
00:00:56.317 [Pipeline] sh
00:00:56.601 + git -C spdk log --oneline -n5
00:00:56.601 876509865 test/nvme/xnvme: Test all conserve_cpu variants
00:00:56.601 a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:00:56.601 bb53e3ad9 test/nvme/xnvme: Drop null_blk
00:00:56.601 ace52fb4b test/nvme/xnvme: Tidy the test suite
00:00:56.601 46fd068fc test/nvme/xnvme: Add io_uring_cmd
00:00:56.613 [Pipeline] }
00:00:56.626 [Pipeline] // stage
00:00:56.636 [Pipeline] stage
00:00:56.638 [Pipeline] { (Prepare)
00:00:56.656 [Pipeline] writeFile
00:00:56.673 [Pipeline] sh
00:00:56.960 + logger -p user.info -t JENKINS-CI
00:00:56.973 [Pipeline] sh
00:00:57.256 + logger -p user.info -t JENKINS-CI
00:00:57.268 [Pipeline] sh
00:00:57.551 + cat autorun-spdk.conf
00:00:57.551 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.551 SPDK_TEST_NVMF=1
00:00:57.551 SPDK_TEST_NVME_CLI=1
00:00:57.551 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.551 SPDK_TEST_NVMF_NICS=e810
00:00:57.551 SPDK_TEST_VFIOUSER=1
00:00:57.551 SPDK_RUN_UBSAN=1
00:00:57.551 NET_TYPE=phy
00:00:57.558 RUN_NIGHTLY=0
00:00:57.563 [Pipeline] readFile
00:00:57.587 [Pipeline] withEnv
00:00:57.588 [Pipeline] {
00:00:57.599 [Pipeline] sh
00:00:57.881 + set -ex
00:00:57.882 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:57.882 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:57.882 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.882 ++ SPDK_TEST_NVMF=1
00:00:57.882 ++ SPDK_TEST_NVME_CLI=1
00:00:57.882 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.882 ++ SPDK_TEST_NVMF_NICS=e810
00:00:57.882 ++ SPDK_TEST_VFIOUSER=1
00:00:57.882 ++ SPDK_RUN_UBSAN=1
00:00:57.882 ++ NET_TYPE=phy
00:00:57.882 ++ RUN_NIGHTLY=0
00:00:57.882 + case $SPDK_TEST_NVMF_NICS in
00:00:57.882 + DRIVERS=ice
00:00:57.882 + [[ tcp == \r\d\m\a ]]
00:00:57.882 + [[ -n ice ]]
00:00:57.882 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:57.882 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:57.882 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:57.882 rmmod: ERROR: Module irdma is not currently loaded
00:00:57.882 rmmod: ERROR: Module i40iw is not currently loaded
00:00:57.882 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:57.882 + true
00:00:57.882 + for D in $DRIVERS
00:00:57.882 + sudo modprobe ice
00:00:57.882 + exit 0
00:00:57.893 [Pipeline] }
00:00:57.909 [Pipeline] // withEnv
00:00:57.914 [Pipeline] }
00:00:57.926 [Pipeline] // stage
00:00:57.934 [Pipeline] catchError
00:00:57.936 [Pipeline] {
00:00:57.948 [Pipeline] timeout
00:00:57.948 Timeout set to expire in 1 hr 0 min
00:00:57.950 [Pipeline] {
00:00:57.964 [Pipeline] stage
00:00:57.967 [Pipeline] { (Tests)
00:00:57.983 [Pipeline] sh
00:00:58.274 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.274 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.274 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.274 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:58.274 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.274 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.274 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:58.274 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.274 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.274 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.274 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:58.274 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.274 + source /etc/os-release
00:00:58.274 ++ NAME='Fedora Linux'
00:00:58.274 ++ VERSION='39 (Cloud Edition)'
00:00:58.274 ++ ID=fedora
00:00:58.274 ++ VERSION_ID=39
00:00:58.274 ++ VERSION_CODENAME=
00:00:58.274 ++ PLATFORM_ID=platform:f39
00:00:58.274 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:58.274 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:58.274 ++ LOGO=fedora-logo-icon
00:00:58.274 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:58.274 ++ HOME_URL=https://fedoraproject.org/
00:00:58.274 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:58.274 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:58.274 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:58.274 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:58.274 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:58.274 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:58.274 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:58.274 ++ SUPPORT_END=2024-11-12
00:00:58.274 ++ VARIANT='Cloud Edition'
00:00:58.274 ++ VARIANT_ID=cloud
00:00:58.274 + uname -a
00:00:58.274 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:58.274 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:00.824 Hugepages
00:01:00.824 node hugesize free / total
00:01:00.824 node0 1048576kB 0 / 0
00:01:00.824 node0 2048kB 0 / 0
00:01:00.824 node1 1048576kB 0 / 0
00:01:00.824 node1 2048kB 0 / 0
00:01:00.824
00:01:00.824 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:00.824 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:00.824 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:00.824 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:00.824 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:00.824 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:00.824 + rm -f /tmp/spdk-ld-path
00:01:00.824 + source autorun-spdk.conf
00:01:00.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.824 ++ SPDK_TEST_NVMF=1
00:01:00.824 ++ SPDK_TEST_NVME_CLI=1
00:01:00.824 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.824 ++ SPDK_TEST_NVMF_NICS=e810
00:01:00.824 ++ SPDK_TEST_VFIOUSER=1
00:01:00.824 ++ SPDK_RUN_UBSAN=1
00:01:00.824 ++ NET_TYPE=phy
00:01:00.824 ++ RUN_NIGHTLY=0
00:01:00.824 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:00.824 + [[ -n '' ]]
00:01:00.824 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.824 + for M in /var/spdk/build-*-manifest.txt
00:01:00.824 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:00.825 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.825 + for M in /var/spdk/build-*-manifest.txt
00:01:00.825 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:00.825 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.825 + for M in /var/spdk/build-*-manifest.txt
00:01:00.825 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:00.825 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.825 ++ uname
00:01:00.825 + [[ Linux == \L\i\n\u\x ]]
00:01:00.825 + sudo dmesg -T
00:01:01.084 + sudo dmesg --clear
00:01:01.084 + dmesg_pid=3208261
00:01:01.084 + [[ Fedora Linux == FreeBSD ]]
00:01:01.084 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.084 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.084 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.084 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.084 + sudo dmesg -Tw
00:01:01.084 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.084 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.084 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.084 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.084 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.084 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.084 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.084 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.084 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.084 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.084 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.084 10:18:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:01.084 10:18:01 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:01.084 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:01.085 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:01.085 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:01.085 10:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:01.085 10:18:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:01.085 10:18:01 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.085 10:18:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:01.085 10:18:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:01.085 10:18:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:01.085 10:18:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:01.085 10:18:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:01.085 10:18:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:01.085 10:18:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.085 10:18:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.085 10:18:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.085 10:18:01 -- paths/export.sh@5 -- $ export PATH
00:01:01.085 10:18:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.085 10:18:01 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:01.085 10:18:01 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:01.085 10:18:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732094281.XXXXXX
00:01:01.085 10:18:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732094281.2p8oTZ
00:01:01.085 10:18:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:01.085 10:18:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:01.085 10:18:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:01.085 10:18:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:01.085 10:18:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:01.085 10:18:01 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:01.085 10:18:01 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:01.085 10:18:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.085 10:18:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:01.085 10:18:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:01.085 10:18:01 -- pm/common@17 -- $ local monitor
00:01:01.085 10:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.085 10:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.085 10:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.085 10:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.085 10:18:01 -- pm/common@25 -- $ sleep 1
00:01:01.085 10:18:01 -- pm/common@21 -- $ date +%s
00:01:01.085 10:18:01 -- pm/common@21 -- $ date +%s
00:01:01.085 10:18:01 -- pm/common@21 -- $ date +%s
00:01:01.085 10:18:01 -- pm/common@21 -- $ date +%s
00:01:01.085 10:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094281
00:01:01.085 10:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094281
00:01:01.085 10:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094281
00:01:01.085 10:18:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094281
00:01:01.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094281_collect-cpu-load.pm.log
00:01:01.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094281_collect-vmstat.pm.log
00:01:01.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094281_collect-cpu-temp.pm.log
00:01:01.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094281_collect-bmc-pm.bmc.pm.log
00:01:02.281 10:18:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:02.281 10:18:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:02.281 10:18:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:02.281 10:18:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.281 10:18:02 -- spdk/autobuild.sh@16 -- $ date -u
00:01:02.281 Wed Nov 20 09:18:02 AM UTC 2024
00:01:02.281 10:18:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:02.281 v25.01-pre-211-g876509865
00:01:02.281 10:18:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:02.281 10:18:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:02.281 10:18:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:02.281 10:18:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:02.281 10:18:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:02.281 10:18:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.281 ************************************
00:01:02.281 START TEST ubsan
00:01:02.281 ************************************
00:01:02.281 10:18:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:02.281 using ubsan
00:01:02.281
00:01:02.281 real 0m0.000s
00:01:02.281 user 0m0.000s
00:01:02.281 sys 0m0.000s
00:01:02.281 10:18:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:02.281 10:18:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.281 ************************************
00:01:02.281 END TEST ubsan
00:01:02.281 ************************************
00:01:02.281 10:18:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:02.281 10:18:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:02.281 10:18:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:02.281 10:18:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:02.540 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:02.540 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:02.799 Using 'verbs' RDMA provider
00:01:15.997 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:28.209 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:28.209 Creating mk/config.mk...done.
00:01:28.209 Creating mk/cc.flags.mk...done.
00:01:28.209 Type 'make' to build.
00:01:28.209 10:18:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:28.209 10:18:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.209 10:18:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.209 10:18:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.209 ************************************
00:01:28.209 START TEST make
00:01:28.209 ************************************
00:01:28.209 10:18:28 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:28.209 make[1]: Nothing to be done for 'all'.
00:01:29.590 The Meson build system
00:01:29.590 Version: 1.5.0
00:01:29.590 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:29.590 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:29.590 Build type: native build
00:01:29.590 Project name: libvfio-user
00:01:29.590 Project version: 0.0.1
00:01:29.590 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:29.590 C linker for the host machine: cc ld.bfd 2.40-14
00:01:29.590 Host machine cpu family: x86_64
00:01:29.590 Host machine cpu: x86_64
00:01:29.590 Run-time dependency threads found: YES
00:01:29.590 Library dl found: YES
00:01:29.590 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:29.590 Run-time dependency json-c found: YES 0.17
00:01:29.590 Run-time dependency cmocka found: YES 1.1.7
00:01:29.590 Program pytest-3 found: NO
00:01:29.590 Program flake8 found: NO
00:01:29.590 Program misspell-fixer found: NO
00:01:29.590 Program restructuredtext-lint found: NO
00:01:29.590 Program valgrind found: YES (/usr/bin/valgrind)
00:01:29.590 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.590 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.590 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.590 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.590 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:29.590 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:29.590 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.590 Build targets in project: 8
00:01:29.590 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:29.590 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:29.590
00:01:29.590 libvfio-user 0.0.1
00:01:29.590
00:01:29.590 User defined options
00:01:29.590 buildtype : debug
00:01:29.590 default_library: shared
00:01:29.590 libdir : /usr/local/lib
00:01:29.590
00:01:29.590 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:30.160 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:30.418 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:30.418 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:30.418 [3/37] Compiling C object samples/null.p/null.c.o
00:01:30.418 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:30.418 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:30.418 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:30.418 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:30.418 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:30.418 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:30.418 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:30.418 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:30.418 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:30.418 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:30.418 [14/37] Compiling C object samples/server.p/server.c.o
00:01:30.418 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:30.418 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:30.418 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:30.418 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:30.418 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:30.418 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:30.418 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:30.418 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:30.418 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:30.418 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:30.418 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:30.418 [26/37] Compiling C object samples/client.p/client.c.o
00:01:30.418 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:30.418 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:30.418 [29/37] Linking target samples/client
00:01:30.418 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:30.418 [31/37] Linking target test/unit_tests
00:01:30.676 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:30.676 [33/37] Linking target samples/null
00:01:30.676 [34/37] Linking target samples/gpio-pci-idio-16
00:01:30.676 [35/37] Linking target samples/server
00:01:30.676 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:30.676 [37/37] Linking target samples/lspci
00:01:30.676 INFO: autodetecting backend as ninja
00:01:30.676 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.676 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.243 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.243 ninja: no work to do.
00:01:36.511 The Meson build system
00:01:36.511 Version: 1.5.0
00:01:36.511 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:36.511 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:36.511 Build type: native build
00:01:36.511 Program cat found: YES (/usr/bin/cat)
00:01:36.511 Project name: DPDK
00:01:36.511 Project version: 24.03.0
00:01:36.511 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:36.511 C linker for the host machine: cc ld.bfd 2.40-14
00:01:36.511 Host machine cpu family: x86_64
00:01:36.511 Host machine cpu: x86_64
00:01:36.511 Message: ## Building in Developer Mode ##
00:01:36.511 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:36.511 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:36.511 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:36.511 Program python3 found: YES (/usr/bin/python3)
00:01:36.511 Program cat found: YES (/usr/bin/cat)
00:01:36.511 Compiler for C supports arguments -march=native: YES
00:01:36.511 Checking for size of "void *" : 8
00:01:36.511 Checking for size of "void *" : 8 (cached)
00:01:36.511 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:36.511 Library m found: YES
00:01:36.511 Library numa found: YES
00:01:36.511 Has header "numaif.h" : YES
00:01:36.511 Library fdt found: NO
00:01:36.511 Library execinfo found: NO
00:01:36.511 Has header "execinfo.h" : YES
00:01:36.511 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:36.511 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:36.511 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:36.511 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:36.511 Run-time dependency openssl found: YES 3.1.1
00:01:36.511 Run-time dependency libpcap found: YES 1.10.4
00:01:36.511 Has header "pcap.h" with dependency libpcap: YES
00:01:36.511 Compiler for C supports arguments -Wcast-qual: YES
00:01:36.511 Compiler for C supports arguments -Wdeprecated: YES
00:01:36.511 Compiler for C supports arguments -Wformat: YES
00:01:36.511 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:36.511 Compiler for C supports arguments -Wformat-security: NO
00:01:36.511 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.511 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:36.511 Compiler for C supports arguments -Wnested-externs: YES
00:01:36.511 Compiler for C supports arguments -Wold-style-definition: YES
00:01:36.511 Compiler for C supports arguments -Wpointer-arith: YES
00:01:36.511 Compiler for C supports arguments -Wsign-compare: YES
00:01:36.511 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:36.511 Compiler for C supports arguments -Wundef: YES
00:01:36.511 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.511 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:36.511 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:36.511 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.511 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:36.511 Program objdump found: YES (/usr/bin/objdump)
00:01:36.511 Compiler for C supports arguments -mavx512f: YES
00:01:36.511 Checking if "AVX512 checking" compiles: YES
00:01:36.511 Fetching value of define "__SSE4_2__" : 1
00:01:36.511 Fetching value of define "__AES__" : 1
00:01:36.511 Fetching value of define "__AVX__" : 1
00:01:36.511 Fetching value of define "__AVX2__" : 1
00:01:36.511 Fetching value of define "__AVX512BW__" : 1
00:01:36.511 Fetching value of define "__AVX512CD__" : 1
00:01:36.511 Fetching value of define "__AVX512DQ__" : 1
00:01:36.511 Fetching value of define "__AVX512F__" : 1
00:01:36.511 Fetching value of define "__AVX512VL__" : 1
00:01:36.511 Fetching value of define "__PCLMUL__" : 1
00:01:36.511 Fetching value of define "__RDRND__" : 1
00:01:36.511 Fetching value of define "__RDSEED__" : 1
00:01:36.511 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:36.511 Fetching value of define "__znver1__" : (undefined)
00:01:36.511 Fetching value of define "__znver2__" : (undefined)
00:01:36.511 Fetching value of define "__znver3__" : (undefined)
00:01:36.511 Fetching value of define "__znver4__" : (undefined)
00:01:36.511 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:36.511 Message: lib/log: Defining dependency "log"
00:01:36.511 Message: lib/kvargs: Defining dependency "kvargs"
00:01:36.511 Message: lib/telemetry: Defining dependency "telemetry"
00:01:36.511 Checking for function "getentropy" : NO
00:01:36.511 Message: lib/eal: Defining dependency "eal"
00:01:36.511 Message: lib/ring: Defining dependency "ring"
00:01:36.511 Message: lib/rcu: Defining dependency "rcu"
00:01:36.511 Message: lib/mempool: Defining dependency "mempool"
00:01:36.511 Message: lib/mbuf: Defining dependency "mbuf"
00:01:36.511 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:36.511 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:36.511 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:36.511 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:36.511 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:36.511 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:36.511 Compiler for C supports arguments -mpclmul: YES
00:01:36.511 Compiler for C supports arguments -maes: YES
00:01:36.511 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:36.511 Compiler for C supports arguments -mavx512bw: YES
00:01:36.511 Compiler for C supports arguments -mavx512dq: YES
00:01:36.511 Compiler for C supports arguments -mavx512vl: YES
00:01:36.511 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:36.511 Compiler for C supports arguments -mavx2: YES
00:01:36.511 Compiler for C supports arguments -mavx: YES
00:01:36.511 Message: lib/net: Defining dependency "net"
00:01:36.511 Message: lib/meter: Defining dependency "meter"
00:01:36.511 Message: lib/ethdev: Defining dependency "ethdev"
00:01:36.511 Message: lib/pci: Defining dependency "pci"
00:01:36.511 Message: lib/cmdline: Defining dependency "cmdline"
00:01:36.511 Message: lib/hash: Defining dependency "hash"
00:01:36.511 Message: lib/timer: Defining dependency "timer"
00:01:36.511 Message: lib/compressdev: Defining dependency "compressdev"
00:01:36.511 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:36.511 Message: lib/dmadev: Defining dependency "dmadev"
00:01:36.511 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:36.511 Message: lib/power: Defining dependency "power"
00:01:36.511 Message: lib/reorder: Defining dependency "reorder"
00:01:36.511 Message: lib/security: Defining dependency "security"
00:01:36.511 Has header "linux/userfaultfd.h" : YES
00:01:36.511 Has header "linux/vduse.h" : YES
00:01:36.511 Message: lib/vhost: Defining dependency "vhost"
00:01:36.511 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:36.511 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:36.511 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:36.511 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:36.511 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:36.512 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:36.512 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:36.512 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:36.512 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:36.512 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:36.512 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:36.512 Configuring doxy-api-html.conf using configuration
00:01:36.512 Configuring doxy-api-man.conf using configuration
00:01:36.512 Program mandb found: YES (/usr/bin/mandb)
00:01:36.512 Program sphinx-build found: NO
00:01:36.512 Configuring rte_build_config.h using configuration
00:01:36.512 Message:
00:01:36.512 =================
00:01:36.512 Applications Enabled
00:01:36.512 =================
00:01:36.512
00:01:36.512 apps:
00:01:36.512
00:01:36.512
00:01:36.512 Message:
00:01:36.512 =================
00:01:36.512 Libraries Enabled
00:01:36.512 =================
00:01:36.512
00:01:36.512 libs:
00:01:36.512 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:36.512 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:36.512 cryptodev, dmadev, power, reorder, security, vhost,
00:01:36.512
00:01:36.512 Message:
00:01:36.512 ===============
00:01:36.512 Drivers Enabled
00:01:36.512 ===============
00:01:36.512
00:01:36.512 common:
00:01:36.512
00:01:36.512 bus:
00:01:36.512 pci, vdev,
00:01:36.512 mempool:
00:01:36.512 ring,
00:01:36.512 dma:
00:01:36.512
00:01:36.512 net:
00:01:36.512
00:01:36.512 crypto:
00:01:36.512
00:01:36.512 compress:
00:01:36.512
00:01:36.512 vdpa:
00:01:36.512
00:01:36.512
00:01:36.512 Message:
00:01:36.512 =================
00:01:36.512 Content Skipped
00:01:36.512 =================
00:01:36.512
00:01:36.512 apps:
00:01:36.512 dumpcap: explicitly disabled via build config
00:01:36.512 graph: explicitly disabled via build config
00:01:36.512 pdump: explicitly disabled via build config
00:01:36.512 proc-info: explicitly disabled via build config
00:01:36.512 test-acl: explicitly disabled via build config
00:01:36.512 test-bbdev: explicitly disabled via build config
00:01:36.512 test-cmdline: explicitly disabled via build config
00:01:36.512 test-compress-perf: explicitly disabled via build config
00:01:36.512 test-crypto-perf: explicitly disabled via build config
00:01:36.512 test-dma-perf: explicitly disabled via build config
00:01:36.512 test-eventdev: explicitly disabled via build config
00:01:36.512 test-fib: explicitly disabled via build config
00:01:36.512 test-flow-perf: explicitly disabled via build config
00:01:36.512 test-gpudev: explicitly disabled via build config
00:01:36.512 test-mldev: explicitly disabled via build config
00:01:36.512 test-pipeline: explicitly disabled via build config
00:01:36.512 test-pmd: explicitly disabled via build config
00:01:36.512 test-regex: explicitly disabled via build config
00:01:36.512 test-sad: explicitly disabled via build config
00:01:36.512 test-security-perf: explicitly disabled via build config
00:01:36.512
00:01:36.512 libs:
00:01:36.512 argparse: explicitly disabled via build config
00:01:36.512 metrics: explicitly disabled via build config
00:01:36.512 acl: explicitly disabled via build config
00:01:36.512 bbdev: explicitly disabled via build config
00:01:36.512 bitratestats: explicitly disabled via build config
00:01:36.512 bpf: explicitly disabled via build config
00:01:36.512 cfgfile: explicitly disabled via build config
00:01:36.512 distributor: explicitly disabled via build config
00:01:36.512 efd: explicitly disabled via build config
00:01:36.512 eventdev: explicitly disabled via build config
00:01:36.512 dispatcher: explicitly disabled via build config
00:01:36.512 gpudev: explicitly disabled via build config
00:01:36.512 gro: explicitly disabled via build config
00:01:36.512 gso: explicitly disabled via build config
00:01:36.512 ip_frag: explicitly disabled via build config
00:01:36.512 jobstats: explicitly disabled via build config
00:01:36.512 latencystats: explicitly disabled via build config
00:01:36.512 lpm: explicitly disabled via build config
00:01:36.512 member: explicitly disabled via build config
00:01:36.512 pcapng: explicitly disabled via build config
00:01:36.512 rawdev: explicitly disabled via build config
00:01:36.512 regexdev: explicitly disabled via build config
00:01:36.512 mldev: explicitly disabled via build config
00:01:36.512 rib: explicitly disabled via build config
00:01:36.512 sched: explicitly disabled via build config
00:01:36.512 stack: explicitly disabled via build config
00:01:36.512 ipsec: explicitly disabled via build config
00:01:36.512 pdcp: explicitly disabled via build config
00:01:36.512 fib: explicitly disabled via build config
00:01:36.512 port: explicitly disabled via build config
00:01:36.512 pdump: explicitly disabled via build config
00:01:36.512 table: explicitly disabled via build config
00:01:36.512 pipeline: explicitly disabled via build config
00:01:36.512 graph: explicitly disabled via build config
00:01:36.512 node: explicitly disabled via build config
00:01:36.512
00:01:36.512 drivers:
00:01:36.512 common/cpt: not in enabled drivers build config
00:01:36.512 common/dpaax: not in enabled drivers build config
00:01:36.512 common/iavf: not in enabled drivers build config
00:01:36.512 common/idpf: not in enabled drivers build config
00:01:36.512 common/ionic: not in enabled drivers build config
00:01:36.512 common/mvep: not in enabled drivers build config
00:01:36.512 common/octeontx: not in enabled drivers build config
00:01:36.512 bus/auxiliary: not in enabled drivers build config
00:01:36.512 bus/cdx: not in enabled drivers build config
00:01:36.512 bus/dpaa: not in enabled drivers build config
00:01:36.512 bus/fslmc: not in enabled drivers build config
00:01:36.512 bus/ifpga: not in enabled drivers build config
00:01:36.512 bus/platform: not in enabled drivers build config
00:01:36.512 bus/uacce: not in enabled drivers build config
00:01:36.512 bus/vmbus: not in enabled drivers build config
00:01:36.512 common/cnxk: not in enabled drivers build config
00:01:36.512 common/mlx5: not in enabled drivers build config
00:01:36.512 common/nfp: not in enabled drivers build config
00:01:36.512 common/nitrox: not in enabled drivers build config
00:01:36.512 common/qat: not in enabled drivers build config
00:01:36.512 common/sfc_efx: not in enabled drivers build config
00:01:36.512 mempool/bucket: not in enabled drivers build config
00:01:36.512 mempool/cnxk: not in enabled drivers build config
00:01:36.512 mempool/dpaa: not in enabled drivers build config
00:01:36.512 mempool/dpaa2: not in enabled drivers build config
00:01:36.512 mempool/octeontx: not in enabled drivers build config
00:01:36.512 mempool/stack: not in enabled drivers build config
00:01:36.512 dma/cnxk: not in enabled drivers build config
00:01:36.512 dma/dpaa: not in enabled drivers build config
00:01:36.512 dma/dpaa2: not in enabled drivers build config
00:01:36.512 dma/hisilicon: not in enabled drivers build config
00:01:36.512 dma/idxd: not in enabled drivers build config
00:01:36.512 dma/ioat: not in enabled drivers build config
00:01:36.512 dma/skeleton: not in enabled drivers build config
00:01:36.512 net/af_packet: not in enabled drivers build config
00:01:36.512 net/af_xdp: not in enabled drivers build config
00:01:36.512 net/ark: not in enabled drivers build config
00:01:36.512 net/atlantic: not in enabled drivers build config
00:01:36.512 net/avp: not in enabled drivers build config
00:01:36.512 net/axgbe: not in enabled drivers build config
00:01:36.512 net/bnx2x: not in enabled drivers build config
00:01:36.512 net/bnxt: not in enabled drivers build config
00:01:36.512 net/bonding: not in enabled drivers build config
00:01:36.512 net/cnxk: not in enabled drivers build config
00:01:36.512 net/cpfl: not in enabled drivers build config
00:01:36.512 net/cxgbe: not in enabled drivers build config
00:01:36.512 net/dpaa: not in enabled drivers build config
00:01:36.512 net/dpaa2: not in enabled drivers build config
00:01:36.512 net/e1000: not in enabled drivers build config
00:01:36.512 net/ena: not in enabled drivers build config
00:01:36.512 net/enetc: not in enabled drivers build config
00:01:36.512 net/enetfec: not in enabled drivers build config
00:01:36.512 net/enic: not in enabled drivers build config
00:01:36.512 net/failsafe: not in enabled drivers build config
00:01:36.512 net/fm10k: not in enabled drivers build config
00:01:36.512 net/gve: not in enabled drivers build config
00:01:36.512 net/hinic: not in enabled drivers build config
00:01:36.512 net/hns3: not in enabled drivers build config
00:01:36.512 net/i40e: not in enabled drivers build config
00:01:36.512 net/iavf: not in enabled drivers build config
00:01:36.512 net/ice: not in enabled drivers build config
00:01:36.512 net/idpf: not in enabled drivers build config
00:01:36.512 net/igc: not in enabled drivers build config
00:01:36.512 net/ionic: not in enabled drivers build config
00:01:36.512 net/ipn3ke: not in enabled drivers build config
00:01:36.512 net/ixgbe: not in enabled drivers build config
00:01:36.512 net/mana: not in enabled drivers build config
00:01:36.512 net/memif: not in enabled drivers build config
00:01:36.512 net/mlx4: not in enabled drivers build config
00:01:36.512 net/mlx5: not in enabled drivers build config
00:01:36.512 net/mvneta: not in enabled drivers build config
00:01:36.512 net/mvpp2: not in enabled drivers build config
00:01:36.512 net/netvsc: not in enabled drivers build config
00:01:36.512 net/nfb: not in enabled drivers build config
00:01:36.512 net/nfp: not in enabled drivers build config
00:01:36.512 net/ngbe: not in enabled drivers build config
00:01:36.512 net/null: not in enabled drivers build config
00:01:36.512 net/octeontx: not in enabled drivers build config
00:01:36.512 net/octeon_ep: not in enabled drivers build config
00:01:36.512 net/pcap: not in enabled drivers build config
00:01:36.512 net/pfe: not in enabled drivers build config
00:01:36.512 net/qede: not in enabled drivers build config
00:01:36.512 net/ring: not in enabled drivers build config
00:01:36.512 net/sfc: not in enabled drivers build config
00:01:36.513 net/softnic: not in enabled drivers build config
00:01:36.513 net/tap: not in enabled drivers build config
00:01:36.513 net/thunderx: not in enabled drivers build config
00:01:36.513 net/txgbe: not in enabled drivers build config
00:01:36.513 net/vdev_netvsc: not in enabled drivers build config
00:01:36.513 net/vhost: not in enabled drivers build config
00:01:36.513 net/virtio: not in enabled drivers build config
00:01:36.513 net/vmxnet3: not in enabled drivers build config
00:01:36.513 raw/*: missing internal dependency, "rawdev"
00:01:36.513 crypto/armv8: not in enabled drivers build config
00:01:36.513 crypto/bcmfs: not in enabled drivers build config
00:01:36.513 crypto/caam_jr: not in enabled drivers build config
00:01:36.513 crypto/ccp: not in enabled drivers build config
00:01:36.513 crypto/cnxk: not in enabled drivers build config
00:01:36.513 crypto/dpaa_sec: not in enabled drivers build config
00:01:36.513 crypto/dpaa2_sec: not in enabled drivers build config
00:01:36.513 crypto/ipsec_mb: not in enabled drivers build config
00:01:36.513 crypto/mlx5: not in enabled drivers build config
00:01:36.513 crypto/mvsam: not in enabled drivers build config
00:01:36.513 crypto/nitrox: not in enabled drivers build config
00:01:36.513 crypto/null: not in enabled drivers build config
00:01:36.513 crypto/octeontx: not in enabled drivers build config
00:01:36.513 crypto/openssl: not in enabled drivers build config
00:01:36.513 crypto/scheduler: not in enabled drivers build config
00:01:36.513 crypto/uadk: not in enabled drivers build config
00:01:36.513 crypto/virtio: not in enabled drivers build config
00:01:36.513 compress/isal: not in enabled drivers build config
00:01:36.513 compress/mlx5: not in enabled drivers build config
00:01:36.513 compress/nitrox: not in enabled drivers build config
00:01:36.513 compress/octeontx: not in enabled drivers build config
00:01:36.513 compress/zlib: not in enabled drivers build config
00:01:36.513 regex/*: missing internal dependency, "regexdev"
00:01:36.513 ml/*: missing internal dependency, "mldev"
00:01:36.513 vdpa/ifc: not in enabled drivers build config
00:01:36.513 vdpa/mlx5: not in enabled drivers build config
00:01:36.513 vdpa/nfp: not in enabled drivers build config
00:01:36.513 vdpa/sfc: not in enabled drivers build config
00:01:36.513 event/*: missing internal dependency, "eventdev"
00:01:36.513 baseband/*: missing internal dependency, "bbdev"
00:01:36.513 gpu/*: missing internal dependency, "gpudev"
00:01:36.513
00:01:36.513
00:01:36.513 Build targets in project: 85
00:01:36.513
00:01:36.513 DPDK 24.03.0
00:01:36.513
00:01:36.513 User defined options
00:01:36.513 buildtype : debug
00:01:36.513 default_library : shared
00:01:36.513 libdir : lib
00:01:36.513 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:36.513 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:36.513 c_link_args :
00:01:36.513 cpu_instruction_set: native
00:01:36.513 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:36.513 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:36.513 enable_docs : false
00:01:36.513 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:36.513 enable_kmods : false
00:01:36.513 max_lcores : 128
00:01:36.513 tests : false
00:01:36.513
00:01:36.513 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.777 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:36.777 [1/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:36.777 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:36.777 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:36.777 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:36.777 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.777 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:36.777 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:36.777 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.777 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.037 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.037 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.037 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.037 [13/268] Linking static target lib/librte_kvargs.a 00:01:37.037 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.037 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.037 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.037 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.037 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.037 [19/268] Linking static target lib/librte_log.a 00:01:37.037 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.037 [21/268] Linking static target lib/librte_pci.a 00:01:37.038 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.038 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.300 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.300 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.300 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.300 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.300 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.300 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.300 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.300 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.300 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.300 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.300 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.300 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.300 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.300 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.300 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.300 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.300 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.300 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.300 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.300 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.300 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.300 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.300 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.300 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.300 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.300 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.300 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.300 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.300 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.300 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.300 [54/268] Linking static target lib/librte_meter.a 00:01:37.300 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.300 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.300 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.300 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.300 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.300 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.300 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.300 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.300 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.300 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.300 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.300 [66/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.300 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.300 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.300 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
00:01:37.300 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.300 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.300 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.300 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.300 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.559 [75/268] Linking static target lib/librte_telemetry.a 00:01:37.559 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.559 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.559 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.559 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.559 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.559 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:37.559 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.559 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.559 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.559 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.559 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.559 [87/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.559 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.559 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.559 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.559 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.559 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.559 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.559 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.559 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.559 [96/268] Linking static target lib/librte_ring.a 00:01:37.559 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.559 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.559 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.559 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.559 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.559 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.559 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.559 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.559 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:37.559 [106/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.559 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.559 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.559 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.559 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:37.559 [111/268] Linking static target lib/librte_mempool.a 00:01:37.559 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:37.559 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.559 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.559 
[115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.559 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.559 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.559 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.559 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.559 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.559 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.559 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.559 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.559 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.560 [125/268] Linking static target lib/librte_rcu.a 00:01:37.560 [126/268] Linking static target lib/librte_net.a 00:01:37.560 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.560 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.560 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.560 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.560 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.560 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.818 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.818 [134/268] Linking static target lib/librte_eal.a 00:01:37.818 [135/268] Linking static target lib/librte_cmdline.a 00:01:37.818 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.818 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:37.818 [138/268] Linking static target 
lib/librte_mbuf.a 00:01:37.818 [139/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.818 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.818 [141/268] Linking static target lib/librte_timer.a 00:01:37.818 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.818 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.818 [144/268] Linking target lib/librte_log.so.24.1 00:01:37.818 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:37.818 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.818 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.818 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.818 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.818 [150/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.818 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.818 [152/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.818 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.818 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.818 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.818 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.818 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.818 [158/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.818 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.818 [160/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:37.818 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:37.818 [162/268] Linking static target lib/librte_dmadev.a 00:01:37.818 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.818 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:37.818 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.818 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.818 [167/268] Linking target lib/librte_telemetry.so.24.1 00:01:37.818 [168/268] Linking target lib/librte_kvargs.so.24.1 00:01:37.818 [169/268] Linking static target lib/librte_compressdev.a 00:01:37.818 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:37.818 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:37.818 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:37.818 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.819 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.819 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:37.819 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.819 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.819 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:37.819 [179/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.819 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.078 [181/268] Linking static target lib/librte_power.a 00:01:38.078 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.078 [183/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.078 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.078 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.078 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.078 [187/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.078 [188/268] Linking static target lib/librte_security.a 00:01:38.078 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.078 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:38.078 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:38.078 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.078 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.078 [194/268] Linking static target lib/librte_reorder.a 00:01:38.078 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.078 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.078 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.078 [198/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.078 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.078 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.078 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.078 [202/268] Linking static target lib/librte_hash.a 00:01:38.078 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:38.078 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.078 [205/268] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.078 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.078 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.337 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.337 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.337 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.337 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:38.337 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:38.337 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.337 [214/268] Linking static target lib/librte_cryptodev.a 00:01:38.337 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:38.596 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [222/268] Linking static target lib/librte_ethdev.a 00:01:38.854 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:38.854 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.854 [225/268] Generating lib/power.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:38.854 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.113 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.680 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:39.939 [229/268] Linking static target lib/librte_vhost.a 00:01:40.197 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.573 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.845 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.783 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.783 [234/268] Linking target lib/librte_eal.so.24.1 00:01:48.042 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:48.042 [236/268] Linking target lib/librte_ring.so.24.1 00:01:48.042 [237/268] Linking target lib/librte_meter.so.24.1 00:01:48.042 [238/268] Linking target lib/librte_pci.so.24.1 00:01:48.042 [239/268] Linking target lib/librte_timer.so.24.1 00:01:48.042 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:48.042 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:48.042 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:48.042 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:48.042 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:48.042 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:48.042 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:48.042 [247/268] Linking target lib/librte_mempool.so.24.1 
00:01:48.042 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:48.042 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:48.301 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:48.301 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.301 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.301 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:48.560 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.560 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:48.560 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:48.560 [257/268] Linking target lib/librte_net.so.24.1 00:01:48.560 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:48.560 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.560 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.818 [261/268] Linking target lib/librte_hash.so.24.1 00:01:48.818 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:48.819 [263/268] Linking target lib/librte_security.so.24.1 00:01:48.819 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:48.819 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:48.819 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.819 [267/268] Linking target lib/librte_power.so.24.1 00:01:48.819 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:48.819 INFO: autodetecting backend as ninja 00:01:48.819 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:01.047 CC lib/ut_mock/mock.o 00:02:01.047 CC lib/log/log.o 00:02:01.047 CC lib/ut/ut.o 00:02:01.047 CC lib/log/log_flags.o 00:02:01.047 CC lib/log/log_deprecated.o 
00:02:01.047 LIB libspdk_ut.a 00:02:01.047 LIB libspdk_ut_mock.a 00:02:01.047 LIB libspdk_log.a 00:02:01.047 SO libspdk_ut.so.2.0 00:02:01.047 SO libspdk_ut_mock.so.6.0 00:02:01.047 SO libspdk_log.so.7.1 00:02:01.047 SYMLINK libspdk_ut.so 00:02:01.047 SYMLINK libspdk_ut_mock.so 00:02:01.047 SYMLINK libspdk_log.so 00:02:01.047 CC lib/util/base64.o 00:02:01.047 CC lib/util/bit_array.o 00:02:01.047 CC lib/util/cpuset.o 00:02:01.047 CC lib/util/crc16.o 00:02:01.047 CC lib/util/crc32.o 00:02:01.047 CC lib/util/crc32c.o 00:02:01.047 CC lib/util/crc64.o 00:02:01.047 CC lib/util/crc32_ieee.o 00:02:01.047 CC lib/ioat/ioat.o 00:02:01.047 CC lib/dma/dma.o 00:02:01.047 CC lib/util/dif.o 00:02:01.047 CXX lib/trace_parser/trace.o 00:02:01.047 CC lib/util/fd.o 00:02:01.047 CC lib/util/fd_group.o 00:02:01.047 CC lib/util/file.o 00:02:01.047 CC lib/util/hexlify.o 00:02:01.047 CC lib/util/iov.o 00:02:01.047 CC lib/util/math.o 00:02:01.047 CC lib/util/pipe.o 00:02:01.047 CC lib/util/net.o 00:02:01.047 CC lib/util/strerror_tls.o 00:02:01.047 CC lib/util/string.o 00:02:01.047 CC lib/util/uuid.o 00:02:01.047 CC lib/util/xor.o 00:02:01.047 CC lib/util/zipf.o 00:02:01.047 CC lib/util/md5.o 00:02:01.047 CC lib/vfio_user/host/vfio_user_pci.o 00:02:01.047 CC lib/vfio_user/host/vfio_user.o 00:02:01.047 LIB libspdk_dma.a 00:02:01.047 SO libspdk_dma.so.5.0 00:02:01.047 LIB libspdk_ioat.a 00:02:01.047 SYMLINK libspdk_dma.so 00:02:01.047 SO libspdk_ioat.so.7.0 00:02:01.047 LIB libspdk_vfio_user.a 00:02:01.047 SYMLINK libspdk_ioat.so 00:02:01.047 SO libspdk_vfio_user.so.5.0 00:02:01.047 LIB libspdk_util.a 00:02:01.047 SYMLINK libspdk_vfio_user.so 00:02:01.047 SO libspdk_util.so.10.1 00:02:01.047 SYMLINK libspdk_util.so 00:02:01.047 LIB libspdk_trace_parser.a 00:02:01.047 SO libspdk_trace_parser.so.6.0 00:02:01.047 SYMLINK libspdk_trace_parser.so 00:02:01.047 CC lib/conf/conf.o 00:02:01.047 CC lib/json/json_parse.o 00:02:01.047 CC lib/json/json_util.o 00:02:01.047 CC lib/idxd/idxd.o 00:02:01.047 CC 
lib/json/json_write.o 00:02:01.047 CC lib/idxd/idxd_user.o 00:02:01.047 CC lib/idxd/idxd_kernel.o 00:02:01.047 CC lib/env_dpdk/env.o 00:02:01.047 CC lib/vmd/vmd.o 00:02:01.047 CC lib/rdma_utils/rdma_utils.o 00:02:01.047 CC lib/env_dpdk/memory.o 00:02:01.047 CC lib/vmd/led.o 00:02:01.047 CC lib/env_dpdk/pci.o 00:02:01.047 CC lib/env_dpdk/init.o 00:02:01.047 CC lib/env_dpdk/threads.o 00:02:01.047 CC lib/env_dpdk/pci_ioat.o 00:02:01.047 CC lib/env_dpdk/pci_virtio.o 00:02:01.047 CC lib/env_dpdk/pci_vmd.o 00:02:01.047 CC lib/env_dpdk/pci_idxd.o 00:02:01.047 CC lib/env_dpdk/pci_event.o 00:02:01.047 CC lib/env_dpdk/sigbus_handler.o 00:02:01.047 CC lib/env_dpdk/pci_dpdk.o 00:02:01.047 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.047 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.047 LIB libspdk_conf.a 00:02:01.047 SO libspdk_conf.so.6.0 00:02:01.047 LIB libspdk_rdma_utils.a 00:02:01.047 LIB libspdk_json.a 00:02:01.047 SO libspdk_rdma_utils.so.1.0 00:02:01.306 SYMLINK libspdk_conf.so 00:02:01.306 SO libspdk_json.so.6.0 00:02:01.306 SYMLINK libspdk_rdma_utils.so 00:02:01.306 SYMLINK libspdk_json.so 00:02:01.306 LIB libspdk_idxd.a 00:02:01.306 SO libspdk_idxd.so.12.1 00:02:01.306 LIB libspdk_vmd.a 00:02:01.565 SO libspdk_vmd.so.6.0 00:02:01.565 SYMLINK libspdk_idxd.so 00:02:01.565 SYMLINK libspdk_vmd.so 00:02:01.565 CC lib/rdma_provider/common.o 00:02:01.565 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:01.565 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.565 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.565 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.565 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.824 LIB libspdk_rdma_provider.a 00:02:01.824 SO libspdk_rdma_provider.so.7.0 00:02:01.824 LIB libspdk_jsonrpc.a 00:02:01.824 SO libspdk_jsonrpc.so.6.0 00:02:01.824 SYMLINK libspdk_rdma_provider.so 00:02:01.824 SYMLINK libspdk_jsonrpc.so 00:02:01.824 LIB libspdk_env_dpdk.a 00:02:02.082 SO libspdk_env_dpdk.so.15.1 00:02:02.082 SYMLINK libspdk_env_dpdk.so 00:02:02.082 CC lib/rpc/rpc.o 00:02:02.340 
LIB libspdk_rpc.a 00:02:02.340 SO libspdk_rpc.so.6.0 00:02:02.340 SYMLINK libspdk_rpc.so 00:02:02.909 CC lib/notify/notify.o 00:02:02.909 CC lib/notify/notify_rpc.o 00:02:02.909 CC lib/keyring/keyring.o 00:02:02.909 CC lib/trace/trace.o 00:02:02.909 CC lib/keyring/keyring_rpc.o 00:02:02.909 CC lib/trace/trace_flags.o 00:02:02.909 CC lib/trace/trace_rpc.o 00:02:02.909 LIB libspdk_notify.a 00:02:02.909 SO libspdk_notify.so.6.0 00:02:02.909 LIB libspdk_trace.a 00:02:02.909 LIB libspdk_keyring.a 00:02:02.909 SYMLINK libspdk_notify.so 00:02:02.909 SO libspdk_keyring.so.2.0 00:02:02.909 SO libspdk_trace.so.11.0 00:02:03.168 SYMLINK libspdk_keyring.so 00:02:03.168 SYMLINK libspdk_trace.so 00:02:03.428 CC lib/thread/thread.o 00:02:03.428 CC lib/thread/iobuf.o 00:02:03.428 CC lib/sock/sock.o 00:02:03.428 CC lib/sock/sock_rpc.o 00:02:03.688 LIB libspdk_sock.a 00:02:03.688 SO libspdk_sock.so.10.0 00:02:03.947 SYMLINK libspdk_sock.so 00:02:04.205 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.205 CC lib/nvme/nvme_ctrlr.o 00:02:04.205 CC lib/nvme/nvme_fabric.o 00:02:04.205 CC lib/nvme/nvme_ns_cmd.o 00:02:04.205 CC lib/nvme/nvme_ns.o 00:02:04.205 CC lib/nvme/nvme_pcie_common.o 00:02:04.205 CC lib/nvme/nvme_pcie.o 00:02:04.205 CC lib/nvme/nvme_qpair.o 00:02:04.205 CC lib/nvme/nvme.o 00:02:04.205 CC lib/nvme/nvme_quirks.o 00:02:04.205 CC lib/nvme/nvme_transport.o 00:02:04.205 CC lib/nvme/nvme_discovery.o 00:02:04.205 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.205 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.205 CC lib/nvme/nvme_tcp.o 00:02:04.205 CC lib/nvme/nvme_opal.o 00:02:04.205 CC lib/nvme/nvme_io_msg.o 00:02:04.205 CC lib/nvme/nvme_poll_group.o 00:02:04.205 CC lib/nvme/nvme_zns.o 00:02:04.205 CC lib/nvme/nvme_stubs.o 00:02:04.205 CC lib/nvme/nvme_auth.o 00:02:04.205 CC lib/nvme/nvme_cuse.o 00:02:04.205 CC lib/nvme/nvme_vfio_user.o 00:02:04.205 CC lib/nvme/nvme_rdma.o 00:02:04.464 LIB libspdk_thread.a 00:02:04.464 SO libspdk_thread.so.11.0 00:02:04.464 SYMLINK libspdk_thread.so 
00:02:04.722 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.722 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.722 CC lib/virtio/virtio.o 00:02:04.722 CC lib/virtio/virtio_vfio_user.o 00:02:04.982 CC lib/virtio/virtio_vhost_user.o 00:02:04.982 CC lib/virtio/virtio_pci.o 00:02:04.982 CC lib/accel/accel.o 00:02:04.982 CC lib/accel/accel_rpc.o 00:02:04.982 CC lib/accel/accel_sw.o 00:02:04.982 CC lib/fsdev/fsdev.o 00:02:04.982 CC lib/fsdev/fsdev_io.o 00:02:04.982 CC lib/fsdev/fsdev_rpc.o 00:02:04.982 CC lib/init/json_config.o 00:02:04.982 CC lib/init/subsystem.o 00:02:04.982 CC lib/init/subsystem_rpc.o 00:02:04.982 CC lib/init/rpc.o 00:02:04.982 CC lib/blob/blobstore.o 00:02:04.982 CC lib/blob/request.o 00:02:04.982 CC lib/blob/zeroes.o 00:02:04.982 CC lib/blob/blob_bs_dev.o 00:02:04.982 LIB libspdk_init.a 00:02:05.240 LIB libspdk_vfu_tgt.a 00:02:05.240 SO libspdk_init.so.6.0 00:02:05.240 LIB libspdk_virtio.a 00:02:05.240 SO libspdk_vfu_tgt.so.3.0 00:02:05.240 SO libspdk_virtio.so.7.0 00:02:05.240 SYMLINK libspdk_init.so 00:02:05.240 SYMLINK libspdk_vfu_tgt.so 00:02:05.240 SYMLINK libspdk_virtio.so 00:02:05.498 LIB libspdk_fsdev.a 00:02:05.498 SO libspdk_fsdev.so.2.0 00:02:05.498 SYMLINK libspdk_fsdev.so 00:02:05.498 CC lib/event/app.o 00:02:05.498 CC lib/event/reactor.o 00:02:05.498 CC lib/event/scheduler_static.o 00:02:05.498 CC lib/event/log_rpc.o 00:02:05.498 CC lib/event/app_rpc.o 00:02:05.757 LIB libspdk_accel.a 00:02:05.757 SO libspdk_accel.so.16.0 00:02:05.757 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:05.757 SYMLINK libspdk_accel.so 00:02:05.757 LIB libspdk_nvme.a 00:02:05.757 LIB libspdk_event.a 00:02:06.016 SO libspdk_event.so.14.0 00:02:06.016 SO libspdk_nvme.so.15.0 00:02:06.016 SYMLINK libspdk_event.so 00:02:06.016 CC lib/bdev/bdev.o 00:02:06.016 CC lib/bdev/bdev_rpc.o 00:02:06.016 CC lib/bdev/bdev_zone.o 00:02:06.016 CC lib/bdev/part.o 00:02:06.016 CC lib/bdev/scsi_nvme.o 00:02:06.016 SYMLINK libspdk_nvme.so 00:02:06.275 LIB libspdk_fuse_dispatcher.a 00:02:06.275 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:06.275 SYMLINK libspdk_fuse_dispatcher.so 00:02:07.213 LIB libspdk_blob.a 00:02:07.213 SO libspdk_blob.so.11.0 00:02:07.213 SYMLINK libspdk_blob.so 00:02:07.472 CC lib/lvol/lvol.o 00:02:07.472 CC lib/blobfs/blobfs.o 00:02:07.472 CC lib/blobfs/tree.o 00:02:08.041 LIB libspdk_bdev.a 00:02:08.041 SO libspdk_bdev.so.17.0 00:02:08.041 LIB libspdk_blobfs.a 00:02:08.041 SO libspdk_blobfs.so.10.0 00:02:08.041 SYMLINK libspdk_bdev.so 00:02:08.041 LIB libspdk_lvol.a 00:02:08.041 SYMLINK libspdk_blobfs.so 00:02:08.041 SO libspdk_lvol.so.10.0 00:02:08.300 SYMLINK libspdk_lvol.so 00:02:08.300 CC lib/nbd/nbd.o 00:02:08.300 CC lib/nbd/nbd_rpc.o 00:02:08.300 CC lib/scsi/dev.o 00:02:08.300 CC lib/scsi/lun.o 00:02:08.300 CC lib/scsi/port.o 00:02:08.300 CC lib/scsi/scsi.o 00:02:08.300 CC lib/scsi/scsi_bdev.o 00:02:08.300 CC lib/ublk/ublk.o 00:02:08.300 CC lib/scsi/scsi_pr.o 00:02:08.300 CC lib/ublk/ublk_rpc.o 00:02:08.300 CC lib/nvmf/ctrlr.o 00:02:08.300 CC lib/scsi/scsi_rpc.o 00:02:08.300 CC lib/scsi/task.o 00:02:08.300 CC lib/nvmf/ctrlr_discovery.o 00:02:08.300 CC lib/nvmf/ctrlr_bdev.o 00:02:08.300 CC lib/nvmf/subsystem.o 00:02:08.300 CC lib/ftl/ftl_core.o 00:02:08.300 CC lib/nvmf/nvmf.o 00:02:08.300 CC lib/ftl/ftl_init.o 00:02:08.300 CC lib/nvmf/nvmf_rpc.o 00:02:08.300 CC lib/ftl/ftl_layout.o 00:02:08.300 CC lib/nvmf/transport.o 00:02:08.300 CC lib/ftl/ftl_debug.o 00:02:08.300 CC lib/ftl/ftl_io.o 00:02:08.300 CC lib/nvmf/tcp.o 00:02:08.300 CC lib/nvmf/stubs.o 00:02:08.300 CC lib/ftl/ftl_sb.o 00:02:08.300 CC lib/nvmf/mdns_server.o 00:02:08.300 CC lib/ftl/ftl_l2p.o 00:02:08.300 CC lib/nvmf/vfio_user.o 00:02:08.300 CC lib/ftl/ftl_l2p_flat.o 00:02:08.300 CC lib/nvmf/rdma.o 00:02:08.300 CC lib/ftl/ftl_nv_cache.o 00:02:08.300 CC lib/nvmf/auth.o 00:02:08.300 CC lib/ftl/ftl_band.o 00:02:08.300 CC lib/ftl/ftl_band_ops.o 00:02:08.300 CC lib/ftl/ftl_writer.o 00:02:08.300 CC lib/ftl/ftl_rq.o 00:02:08.300 CC lib/ftl/ftl_reloc.o 00:02:08.300 CC 
lib/ftl/ftl_l2p_cache.o 00:02:08.300 CC lib/ftl/ftl_p2l.o 00:02:08.300 CC lib/ftl/ftl_p2l_log.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.300 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.559 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.559 CC lib/ftl/utils/ftl_conf.o 00:02:08.559 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.559 CC lib/ftl/utils/ftl_md.o 00:02:08.559 CC lib/ftl/utils/ftl_mempool.o 00:02:08.559 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.559 CC lib/ftl/utils/ftl_property.o 00:02:08.559 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.559 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.559 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.559 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.559 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.559 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.559 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.559 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.559 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.559 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.559 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:08.559 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:08.559 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.559 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.559 CC lib/ftl/base/ftl_base_dev.o 00:02:08.559 CC lib/ftl/ftl_trace.o 00:02:09.125 LIB libspdk_nbd.a 00:02:09.125 SO libspdk_nbd.so.7.0 00:02:09.125 LIB libspdk_scsi.a 00:02:09.125 LIB libspdk_ublk.a 00:02:09.125 SYMLINK libspdk_nbd.so 00:02:09.125 SO libspdk_scsi.so.9.0 00:02:09.125 SO libspdk_ublk.so.3.0 00:02:09.125 SYMLINK libspdk_scsi.so 00:02:09.125 SYMLINK libspdk_ublk.so 00:02:09.384 LIB 
libspdk_ftl.a 00:02:09.384 SO libspdk_ftl.so.9.0 00:02:09.384 CC lib/vhost/vhost.o 00:02:09.384 CC lib/vhost/vhost_rpc.o 00:02:09.384 CC lib/vhost/vhost_scsi.o 00:02:09.384 CC lib/vhost/vhost_blk.o 00:02:09.384 CC lib/vhost/rte_vhost_user.o 00:02:09.384 CC lib/iscsi/conn.o 00:02:09.384 CC lib/iscsi/init_grp.o 00:02:09.384 CC lib/iscsi/iscsi.o 00:02:09.657 CC lib/iscsi/param.o 00:02:09.657 CC lib/iscsi/portal_grp.o 00:02:09.657 CC lib/iscsi/tgt_node.o 00:02:09.657 CC lib/iscsi/iscsi_subsystem.o 00:02:09.657 CC lib/iscsi/iscsi_rpc.o 00:02:09.657 CC lib/iscsi/task.o 00:02:09.657 SYMLINK libspdk_ftl.so 00:02:10.224 LIB libspdk_nvmf.a 00:02:10.224 SO libspdk_nvmf.so.20.0 00:02:10.224 LIB libspdk_vhost.a 00:02:10.225 SO libspdk_vhost.so.8.0 00:02:10.484 SYMLINK libspdk_vhost.so 00:02:10.484 SYMLINK libspdk_nvmf.so 00:02:10.484 LIB libspdk_iscsi.a 00:02:10.484 SO libspdk_iscsi.so.8.0 00:02:10.743 SYMLINK libspdk_iscsi.so 00:02:11.370 CC module/vfu_device/vfu_virtio.o 00:02:11.370 CC module/vfu_device/vfu_virtio_blk.o 00:02:11.370 CC module/vfu_device/vfu_virtio_rpc.o 00:02:11.370 CC module/vfu_device/vfu_virtio_scsi.o 00:02:11.370 CC module/vfu_device/vfu_virtio_fs.o 00:02:11.370 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.370 LIB libspdk_env_dpdk_rpc.a 00:02:11.370 CC module/blob/bdev/blob_bdev.o 00:02:11.370 CC module/accel/iaa/accel_iaa.o 00:02:11.370 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.370 CC module/keyring/linux/keyring.o 00:02:11.370 CC module/keyring/linux/keyring_rpc.o 00:02:11.370 CC module/keyring/file/keyring.o 00:02:11.370 CC module/keyring/file/keyring_rpc.o 00:02:11.370 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.370 CC module/accel/dsa/accel_dsa.o 00:02:11.370 CC module/accel/ioat/accel_ioat.o 00:02:11.370 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.370 CC module/accel/error/accel_error.o 00:02:11.370 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.370 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.370 CC 
module/accel/error/accel_error_rpc.o 00:02:11.370 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.370 CC module/sock/posix/posix.o 00:02:11.370 CC module/fsdev/aio/fsdev_aio.o 00:02:11.370 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:11.370 CC module/fsdev/aio/linux_aio_mgr.o 00:02:11.370 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.370 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.689 LIB libspdk_keyring_linux.a 00:02:11.689 LIB libspdk_keyring_file.a 00:02:11.689 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.689 LIB libspdk_scheduler_gscheduler.a 00:02:11.689 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.689 LIB libspdk_accel_iaa.a 00:02:11.689 SO libspdk_keyring_linux.so.1.0 00:02:11.689 LIB libspdk_scheduler_dynamic.a 00:02:11.689 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.689 SO libspdk_keyring_file.so.2.0 00:02:11.689 LIB libspdk_accel_ioat.a 00:02:11.689 LIB libspdk_accel_error.a 00:02:11.689 SO libspdk_accel_iaa.so.3.0 00:02:11.689 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.689 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.689 SO libspdk_accel_ioat.so.6.0 00:02:11.689 LIB libspdk_blob_bdev.a 00:02:11.689 SYMLINK libspdk_keyring_file.so 00:02:11.689 SO libspdk_accel_error.so.2.0 00:02:11.689 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.689 SYMLINK libspdk_keyring_linux.so 00:02:11.689 LIB libspdk_accel_dsa.a 00:02:11.689 SO libspdk_blob_bdev.so.11.0 00:02:11.689 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.689 SYMLINK libspdk_accel_iaa.so 00:02:11.689 SO libspdk_accel_dsa.so.5.0 00:02:11.689 SYMLINK libspdk_accel_ioat.so 00:02:11.690 SYMLINK libspdk_accel_error.so 00:02:11.690 SYMLINK libspdk_blob_bdev.so 00:02:11.690 SYMLINK libspdk_accel_dsa.so 00:02:11.690 LIB libspdk_vfu_device.a 00:02:11.690 SO libspdk_vfu_device.so.3.0 00:02:11.969 SYMLINK libspdk_vfu_device.so 00:02:11.969 LIB libspdk_fsdev_aio.a 00:02:11.969 SO libspdk_fsdev_aio.so.1.0 00:02:11.969 LIB libspdk_sock_posix.a 00:02:11.969 SO libspdk_sock_posix.so.6.0 00:02:11.969 
SYMLINK libspdk_fsdev_aio.so 00:02:11.969 SYMLINK libspdk_sock_posix.so 00:02:12.226 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.226 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.226 CC module/bdev/malloc/bdev_malloc.o 00:02:12.226 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.226 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.226 CC module/bdev/gpt/gpt.o 00:02:12.226 CC module/bdev/null/bdev_null.o 00:02:12.226 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.226 CC module/bdev/delay/vbdev_delay.o 00:02:12.226 CC module/bdev/null/bdev_null_rpc.o 00:02:12.226 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.226 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.226 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.226 CC module/bdev/ftl/bdev_ftl.o 00:02:12.226 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.226 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.226 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.226 CC module/bdev/error/vbdev_error.o 00:02:12.226 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.226 CC module/bdev/split/vbdev_split.o 00:02:12.226 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.226 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.226 CC module/bdev/raid/bdev_raid.o 00:02:12.226 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.226 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.226 CC module/bdev/raid/raid0.o 00:02:12.226 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.226 CC module/bdev/aio/bdev_aio.o 00:02:12.226 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.226 CC module/bdev/raid/raid1.o 00:02:12.226 CC module/bdev/raid/concat.o 00:02:12.226 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.226 CC module/bdev/nvme/bdev_nvme.o 00:02:12.226 CC module/bdev/nvme/nvme_rpc.o 00:02:12.226 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.226 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.226 CC module/bdev/nvme/vbdev_opal.o 00:02:12.226 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.226 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.226 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.226 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.226 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.226 LIB libspdk_blobfs_bdev.a 00:02:12.484 SO libspdk_blobfs_bdev.so.6.0 00:02:12.484 LIB libspdk_bdev_split.a 00:02:12.484 SYMLINK libspdk_blobfs_bdev.so 00:02:12.484 SO libspdk_bdev_split.so.6.0 00:02:12.484 LIB libspdk_bdev_error.a 00:02:12.484 LIB libspdk_bdev_ftl.a 00:02:12.484 LIB libspdk_bdev_gpt.a 00:02:12.484 LIB libspdk_bdev_null.a 00:02:12.484 LIB libspdk_bdev_passthru.a 00:02:12.484 SYMLINK libspdk_bdev_split.so 00:02:12.484 SO libspdk_bdev_error.so.6.0 00:02:12.484 SO libspdk_bdev_ftl.so.6.0 00:02:12.484 LIB libspdk_bdev_malloc.a 00:02:12.484 SO libspdk_bdev_gpt.so.6.0 00:02:12.484 SO libspdk_bdev_passthru.so.6.0 00:02:12.484 LIB libspdk_bdev_delay.a 00:02:12.484 SO libspdk_bdev_null.so.6.0 00:02:12.484 LIB libspdk_bdev_zone_block.a 00:02:12.484 LIB libspdk_bdev_aio.a 00:02:12.484 SO libspdk_bdev_malloc.so.6.0 00:02:12.484 SYMLINK libspdk_bdev_ftl.so 00:02:12.484 SO libspdk_bdev_delay.so.6.0 00:02:12.484 SYMLINK libspdk_bdev_error.so 00:02:12.484 SO libspdk_bdev_zone_block.so.6.0 00:02:12.484 SO libspdk_bdev_aio.so.6.0 00:02:12.484 SYMLINK libspdk_bdev_gpt.so 00:02:12.484 SYMLINK libspdk_bdev_passthru.so 00:02:12.484 LIB libspdk_bdev_iscsi.a 00:02:12.484 SYMLINK libspdk_bdev_null.so 00:02:12.484 SYMLINK libspdk_bdev_malloc.so 00:02:12.484 SO libspdk_bdev_iscsi.so.6.0 00:02:12.743 SYMLINK libspdk_bdev_zone_block.so 00:02:12.743 SYMLINK libspdk_bdev_aio.so 00:02:12.743 SYMLINK libspdk_bdev_delay.so 00:02:12.743 LIB libspdk_bdev_virtio.a 00:02:12.743 SYMLINK libspdk_bdev_iscsi.so 00:02:12.743 SO libspdk_bdev_virtio.so.6.0 00:02:12.743 LIB libspdk_bdev_lvol.a 00:02:12.743 SO libspdk_bdev_lvol.so.6.0 00:02:12.743 SYMLINK libspdk_bdev_virtio.so 00:02:12.743 SYMLINK libspdk_bdev_lvol.so 00:02:13.002 LIB libspdk_bdev_raid.a 00:02:13.002 SO libspdk_bdev_raid.so.6.0 00:02:13.002 SYMLINK libspdk_bdev_raid.so 00:02:13.938 
LIB libspdk_bdev_nvme.a 00:02:14.198 SO libspdk_bdev_nvme.so.7.1 00:02:14.198 SYMLINK libspdk_bdev_nvme.so 00:02:14.766 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.766 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.766 CC module/event/subsystems/vmd/vmd.o 00:02:14.766 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.766 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.766 CC module/event/subsystems/keyring/keyring.o 00:02:14.766 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.766 CC module/event/subsystems/sock/sock.o 00:02:14.766 CC module/event/subsystems/fsdev/fsdev.o 00:02:14.766 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.025 LIB libspdk_event_vhost_blk.a 00:02:15.025 LIB libspdk_event_vmd.a 00:02:15.025 LIB libspdk_event_vfu_tgt.a 00:02:15.025 LIB libspdk_event_scheduler.a 00:02:15.025 LIB libspdk_event_fsdev.a 00:02:15.025 LIB libspdk_event_keyring.a 00:02:15.025 LIB libspdk_event_iobuf.a 00:02:15.025 LIB libspdk_event_sock.a 00:02:15.025 SO libspdk_event_vhost_blk.so.3.0 00:02:15.025 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.025 SO libspdk_event_vmd.so.6.0 00:02:15.025 SO libspdk_event_scheduler.so.4.0 00:02:15.025 SO libspdk_event_keyring.so.1.0 00:02:15.025 SO libspdk_event_fsdev.so.1.0 00:02:15.025 SO libspdk_event_iobuf.so.3.0 00:02:15.025 SO libspdk_event_sock.so.5.0 00:02:15.025 SYMLINK libspdk_event_vmd.so 00:02:15.025 SYMLINK libspdk_event_vhost_blk.so 00:02:15.025 SYMLINK libspdk_event_scheduler.so 00:02:15.025 SYMLINK libspdk_event_vfu_tgt.so 00:02:15.025 SYMLINK libspdk_event_fsdev.so 00:02:15.025 SYMLINK libspdk_event_keyring.so 00:02:15.025 SYMLINK libspdk_event_sock.so 00:02:15.025 SYMLINK libspdk_event_iobuf.so 00:02:15.284 CC module/event/subsystems/accel/accel.o 00:02:15.542 LIB libspdk_event_accel.a 00:02:15.542 SO libspdk_event_accel.so.6.0 00:02:15.542 SYMLINK libspdk_event_accel.so 00:02:16.108 CC module/event/subsystems/bdev/bdev.o 00:02:16.108 LIB libspdk_event_bdev.a 00:02:16.108 SO 
libspdk_event_bdev.so.6.0 00:02:16.108 SYMLINK libspdk_event_bdev.so 00:02:16.674 CC module/event/subsystems/ublk/ublk.o 00:02:16.674 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.674 CC module/event/subsystems/scsi/scsi.o 00:02:16.674 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.674 CC module/event/subsystems/nbd/nbd.o 00:02:16.674 LIB libspdk_event_ublk.a 00:02:16.674 LIB libspdk_event_nbd.a 00:02:16.674 LIB libspdk_event_scsi.a 00:02:16.674 SO libspdk_event_ublk.so.3.0 00:02:16.674 SO libspdk_event_nbd.so.6.0 00:02:16.674 SO libspdk_event_scsi.so.6.0 00:02:16.674 LIB libspdk_event_nvmf.a 00:02:16.674 SYMLINK libspdk_event_ublk.so 00:02:16.674 SYMLINK libspdk_event_nbd.so 00:02:16.674 SYMLINK libspdk_event_scsi.so 00:02:16.674 SO libspdk_event_nvmf.so.6.0 00:02:16.933 SYMLINK libspdk_event_nvmf.so 00:02:17.192 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.192 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.192 LIB libspdk_event_vhost_scsi.a 00:02:17.192 LIB libspdk_event_iscsi.a 00:02:17.192 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.192 SO libspdk_event_iscsi.so.6.0 00:02:17.192 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.451 SYMLINK libspdk_event_iscsi.so 00:02:17.451 SO libspdk.so.6.0 00:02:17.451 SYMLINK libspdk.so 00:02:17.710 CXX app/trace/trace.o 00:02:17.971 CC app/spdk_top/spdk_top.o 00:02:17.971 CC test/rpc_client/rpc_client_test.o 00:02:17.971 CC app/spdk_nvme_identify/identify.o 00:02:17.971 CC app/trace_record/trace_record.o 00:02:17.971 CC app/spdk_nvme_perf/perf.o 00:02:17.971 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.971 CC app/spdk_lspci/spdk_lspci.o 00:02:17.971 TEST_HEADER include/spdk/accel.h 00:02:17.971 TEST_HEADER include/spdk/accel_module.h 00:02:17.971 TEST_HEADER include/spdk/barrier.h 00:02:17.971 TEST_HEADER include/spdk/assert.h 00:02:17.971 TEST_HEADER include/spdk/bdev_module.h 00:02:17.971 TEST_HEADER include/spdk/base64.h 00:02:17.971 TEST_HEADER include/spdk/bdev.h 00:02:17.971 TEST_HEADER 
include/spdk/bdev_zone.h 00:02:17.971 TEST_HEADER include/spdk/bit_array.h 00:02:17.971 TEST_HEADER include/spdk/bit_pool.h 00:02:17.971 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.971 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.971 TEST_HEADER include/spdk/blob.h 00:02:17.971 TEST_HEADER include/spdk/blobfs.h 00:02:17.971 TEST_HEADER include/spdk/conf.h 00:02:17.971 TEST_HEADER include/spdk/config.h 00:02:17.971 TEST_HEADER include/spdk/crc16.h 00:02:17.971 TEST_HEADER include/spdk/cpuset.h 00:02:17.971 TEST_HEADER include/spdk/crc32.h 00:02:17.971 TEST_HEADER include/spdk/crc64.h 00:02:17.971 TEST_HEADER include/spdk/dif.h 00:02:17.971 TEST_HEADER include/spdk/endian.h 00:02:17.971 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.971 TEST_HEADER include/spdk/dma.h 00:02:17.971 TEST_HEADER include/spdk/env.h 00:02:17.971 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.971 TEST_HEADER include/spdk/event.h 00:02:17.971 TEST_HEADER include/spdk/fd.h 00:02:17.971 TEST_HEADER include/spdk/fd_group.h 00:02:17.971 TEST_HEADER include/spdk/file.h 00:02:17.971 TEST_HEADER include/spdk/fsdev.h 00:02:17.971 TEST_HEADER include/spdk/ftl.h 00:02:17.971 TEST_HEADER include/spdk/fsdev_module.h 00:02:17.971 CC app/spdk_dd/spdk_dd.o 00:02:17.971 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:17.971 TEST_HEADER include/spdk/hexlify.h 00:02:17.971 TEST_HEADER include/spdk/idxd.h 00:02:17.971 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.971 TEST_HEADER include/spdk/histogram_data.h 00:02:17.971 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.971 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.971 TEST_HEADER include/spdk/ioat.h 00:02:17.971 TEST_HEADER include/spdk/init.h 00:02:17.971 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.971 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.971 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.971 TEST_HEADER include/spdk/keyring.h 00:02:17.971 TEST_HEADER include/spdk/json.h 00:02:17.971 CC app/nvmf_tgt/nvmf_main.o 00:02:17.971 TEST_HEADER 
include/spdk/keyring_module.h 00:02:17.971 TEST_HEADER include/spdk/log.h 00:02:17.971 TEST_HEADER include/spdk/likely.h 00:02:17.971 TEST_HEADER include/spdk/lvol.h 00:02:17.971 TEST_HEADER include/spdk/md5.h 00:02:17.971 TEST_HEADER include/spdk/memory.h 00:02:17.971 TEST_HEADER include/spdk/nbd.h 00:02:17.971 TEST_HEADER include/spdk/mmio.h 00:02:17.971 TEST_HEADER include/spdk/net.h 00:02:17.971 TEST_HEADER include/spdk/nvme.h 00:02:17.971 TEST_HEADER include/spdk/notify.h 00:02:17.971 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.971 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.971 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.971 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.971 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.971 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.971 TEST_HEADER include/spdk/nvmf.h 00:02:17.971 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.971 CC app/spdk_tgt/spdk_tgt.o 00:02:17.971 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.971 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.971 TEST_HEADER include/spdk/opal.h 00:02:17.971 TEST_HEADER include/spdk/opal_spec.h 00:02:17.971 TEST_HEADER include/spdk/pipe.h 00:02:17.971 TEST_HEADER include/spdk/queue.h 00:02:17.971 TEST_HEADER include/spdk/pci_ids.h 00:02:17.971 TEST_HEADER include/spdk/reduce.h 00:02:17.971 TEST_HEADER include/spdk/rpc.h 00:02:17.971 TEST_HEADER include/spdk/scheduler.h 00:02:17.971 TEST_HEADER include/spdk/scsi.h 00:02:17.971 TEST_HEADER include/spdk/sock.h 00:02:17.971 TEST_HEADER include/spdk/stdinc.h 00:02:17.971 TEST_HEADER include/spdk/string.h 00:02:17.971 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.971 TEST_HEADER include/spdk/thread.h 00:02:17.971 TEST_HEADER include/spdk/trace_parser.h 00:02:17.971 TEST_HEADER include/spdk/trace.h 00:02:17.971 TEST_HEADER include/spdk/ublk.h 00:02:17.971 TEST_HEADER include/spdk/tree.h 00:02:17.971 TEST_HEADER include/spdk/util.h 00:02:17.971 TEST_HEADER include/spdk/version.h 00:02:17.971 TEST_HEADER 
include/spdk/uuid.h 00:02:17.971 TEST_HEADER include/spdk/vhost.h 00:02:17.971 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.971 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.971 TEST_HEADER include/spdk/vmd.h 00:02:17.971 TEST_HEADER include/spdk/zipf.h 00:02:17.971 TEST_HEADER include/spdk/xor.h 00:02:17.971 CXX test/cpp_headers/accel.o 00:02:17.971 CXX test/cpp_headers/assert.o 00:02:17.971 CXX test/cpp_headers/accel_module.o 00:02:17.971 CXX test/cpp_headers/barrier.o 00:02:17.971 CXX test/cpp_headers/base64.o 00:02:17.971 CXX test/cpp_headers/bdev_module.o 00:02:17.972 CXX test/cpp_headers/bdev.o 00:02:17.972 CXX test/cpp_headers/bdev_zone.o 00:02:17.972 CXX test/cpp_headers/bit_array.o 00:02:17.972 CXX test/cpp_headers/bit_pool.o 00:02:17.972 CXX test/cpp_headers/blobfs.o 00:02:17.972 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.972 CXX test/cpp_headers/blob_bdev.o 00:02:17.972 CXX test/cpp_headers/conf.o 00:02:17.972 CXX test/cpp_headers/config.o 00:02:17.972 CXX test/cpp_headers/cpuset.o 00:02:17.972 CXX test/cpp_headers/blob.o 00:02:17.972 CXX test/cpp_headers/crc32.o 00:02:17.972 CXX test/cpp_headers/crc16.o 00:02:17.972 CXX test/cpp_headers/crc64.o 00:02:17.972 CXX test/cpp_headers/dif.o 00:02:17.972 CXX test/cpp_headers/dma.o 00:02:17.972 CXX test/cpp_headers/endian.o 00:02:17.972 CXX test/cpp_headers/env.o 00:02:17.972 CXX test/cpp_headers/env_dpdk.o 00:02:17.972 CXX test/cpp_headers/event.o 00:02:17.972 CXX test/cpp_headers/fd.o 00:02:17.972 CXX test/cpp_headers/fd_group.o 00:02:17.972 CXX test/cpp_headers/fsdev.o 00:02:17.972 CXX test/cpp_headers/file.o 00:02:17.972 CXX test/cpp_headers/fsdev_module.o 00:02:17.972 CXX test/cpp_headers/ftl.o 00:02:17.972 CXX test/cpp_headers/fuse_dispatcher.o 00:02:17.972 CXX test/cpp_headers/gpt_spec.o 00:02:17.972 CXX test/cpp_headers/hexlify.o 00:02:17.972 CXX test/cpp_headers/idxd.o 00:02:17.972 CXX test/cpp_headers/histogram_data.o 00:02:17.972 CXX test/cpp_headers/idxd_spec.o 00:02:17.972 CXX 
test/cpp_headers/init.o 00:02:17.972 CXX test/cpp_headers/ioat.o 00:02:17.972 CXX test/cpp_headers/ioat_spec.o 00:02:17.972 CXX test/cpp_headers/iscsi_spec.o 00:02:17.972 CXX test/cpp_headers/json.o 00:02:17.972 CXX test/cpp_headers/jsonrpc.o 00:02:17.972 CXX test/cpp_headers/keyring.o 00:02:17.972 CXX test/cpp_headers/keyring_module.o 00:02:17.972 CXX test/cpp_headers/likely.o 00:02:17.972 CXX test/cpp_headers/log.o 00:02:17.972 CXX test/cpp_headers/md5.o 00:02:17.972 CXX test/cpp_headers/lvol.o 00:02:17.972 CXX test/cpp_headers/mmio.o 00:02:17.972 CXX test/cpp_headers/memory.o 00:02:17.972 CXX test/cpp_headers/nbd.o 00:02:17.972 CXX test/cpp_headers/net.o 00:02:17.972 CXX test/cpp_headers/notify.o 00:02:17.972 CXX test/cpp_headers/nvme.o 00:02:17.972 CXX test/cpp_headers/nvme_intel.o 00:02:17.972 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.972 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.972 CXX test/cpp_headers/nvme_spec.o 00:02:17.972 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.972 CXX test/cpp_headers/nvme_zns.o 00:02:17.972 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.972 CXX test/cpp_headers/nvmf.o 00:02:17.972 CC examples/util/zipf/zipf.o 00:02:17.972 CXX test/cpp_headers/nvmf_spec.o 00:02:17.972 CXX test/cpp_headers/nvmf_transport.o 00:02:17.972 CC examples/ioat/perf/perf.o 00:02:17.972 CC test/env/memory/memory_ut.o 00:02:17.972 CXX test/cpp_headers/opal.o 00:02:17.972 CC test/thread/poller_perf/poller_perf.o 00:02:17.972 CC test/app/histogram_perf/histogram_perf.o 00:02:17.972 CC test/env/vtophys/vtophys.o 00:02:17.972 CC app/fio/nvme/fio_plugin.o 00:02:17.972 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.972 CC test/env/pci/pci_ut.o 00:02:17.972 CC test/app/jsoncat/jsoncat.o 00:02:17.972 CC examples/ioat/verify/verify.o 00:02:18.236 CC test/app/stub/stub.o 00:02:18.237 CC test/dma/test_dma/test_dma.o 00:02:18.237 CC app/fio/bdev/fio_plugin.o 00:02:18.237 CC test/app/bdev_svc/bdev_svc.o 00:02:18.237 LINK spdk_lspci 00:02:18.237 LINK 
interrupt_tgt 00:02:18.237 LINK rpc_client_test 00:02:18.237 LINK spdk_nvme_discover 00:02:18.502 LINK nvmf_tgt 00:02:18.502 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.502 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.502 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.502 LINK spdk_tgt 00:02:18.502 LINK spdk_trace_record 00:02:18.502 LINK zipf 00:02:18.502 CXX test/cpp_headers/opal_spec.o 00:02:18.502 CXX test/cpp_headers/pci_ids.o 00:02:18.502 CXX test/cpp_headers/pipe.o 00:02:18.502 CXX test/cpp_headers/queue.o 00:02:18.502 CXX test/cpp_headers/reduce.o 00:02:18.760 CXX test/cpp_headers/rpc.o 00:02:18.760 LINK iscsi_tgt 00:02:18.760 CXX test/cpp_headers/scheduler.o 00:02:18.760 LINK jsoncat 00:02:18.760 CXX test/cpp_headers/scsi.o 00:02:18.760 CXX test/cpp_headers/scsi_spec.o 00:02:18.760 LINK histogram_perf 00:02:18.760 CXX test/cpp_headers/sock.o 00:02:18.760 CXX test/cpp_headers/stdinc.o 00:02:18.761 CXX test/cpp_headers/string.o 00:02:18.761 CXX test/cpp_headers/thread.o 00:02:18.761 CXX test/cpp_headers/trace.o 00:02:18.761 CXX test/cpp_headers/trace_parser.o 00:02:18.761 LINK poller_perf 00:02:18.761 CXX test/cpp_headers/tree.o 00:02:18.761 CXX test/cpp_headers/ublk.o 00:02:18.761 LINK vtophys 00:02:18.761 CXX test/cpp_headers/util.o 00:02:18.761 CXX test/cpp_headers/uuid.o 00:02:18.761 CXX test/cpp_headers/version.o 00:02:18.761 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.761 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.761 LINK stub 00:02:18.761 CXX test/cpp_headers/vhost.o 00:02:18.761 CXX test/cpp_headers/vmd.o 00:02:18.761 CXX test/cpp_headers/xor.o 00:02:18.761 LINK env_dpdk_post_init 00:02:18.761 LINK verify 00:02:18.761 CXX test/cpp_headers/zipf.o 00:02:18.761 LINK spdk_dd 00:02:18.761 LINK spdk_trace 00:02:18.761 LINK bdev_svc 00:02:18.761 LINK ioat_perf 00:02:18.761 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.761 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.017 LINK pci_ut 00:02:19.017 LINK spdk_nvme 00:02:19.017 
LINK nvme_fuzz 00:02:19.017 CC examples/sock/hello_world/hello_sock.o 00:02:19.017 CC examples/idxd/perf/perf.o 00:02:19.017 CC app/vhost/vhost.o 00:02:19.017 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.017 CC examples/vmd/led/led.o 00:02:19.017 LINK test_dma 00:02:19.017 CC test/event/reactor/reactor.o 00:02:19.017 LINK spdk_bdev 00:02:19.017 CC test/event/reactor_perf/reactor_perf.o 00:02:19.017 CC examples/thread/thread/thread_ex.o 00:02:19.017 CC test/event/event_perf/event_perf.o 00:02:19.017 CC test/event/app_repeat/app_repeat.o 00:02:19.017 CC test/event/scheduler/scheduler.o 00:02:19.277 LINK spdk_nvme_perf 00:02:19.277 LINK spdk_top 00:02:19.277 LINK spdk_nvme_identify 00:02:19.277 LINK vhost_fuzz 00:02:19.277 LINK lsvmd 00:02:19.277 LINK mem_callbacks 00:02:19.277 LINK led 00:02:19.277 LINK reactor 00:02:19.277 LINK event_perf 00:02:19.277 LINK reactor_perf 00:02:19.277 LINK app_repeat 00:02:19.277 LINK hello_sock 00:02:19.277 LINK vhost 00:02:19.277 LINK idxd_perf 00:02:19.277 LINK thread 00:02:19.277 LINK scheduler 00:02:19.536 LINK memory_ut 00:02:19.536 CC test/nvme/reset/reset.o 00:02:19.536 CC test/nvme/startup/startup.o 00:02:19.536 CC test/nvme/cuse/cuse.o 00:02:19.536 CC test/nvme/overhead/overhead.o 00:02:19.536 CC test/nvme/fdp/fdp.o 00:02:19.536 CC test/nvme/reserve/reserve.o 00:02:19.536 CC test/nvme/e2edp/nvme_dp.o 00:02:19.536 CC test/nvme/sgl/sgl.o 00:02:19.536 CC test/nvme/simple_copy/simple_copy.o 00:02:19.536 CC test/nvme/boot_partition/boot_partition.o 00:02:19.536 CC test/nvme/err_injection/err_injection.o 00:02:19.536 CC test/nvme/compliance/nvme_compliance.o 00:02:19.536 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.536 CC test/nvme/connect_stress/connect_stress.o 00:02:19.536 CC test/nvme/aer/aer.o 00:02:19.536 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.536 CC test/accel/dif/dif.o 00:02:19.536 CC test/blobfs/mkfs/mkfs.o 00:02:19.795 CC test/lvol/esnap/esnap.o 00:02:19.795 CC examples/nvme/hotplug/hotplug.o 00:02:19.795 
LINK startup 00:02:19.795 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.795 CC examples/nvme/arbitration/arbitration.o 00:02:19.795 CC examples/nvme/abort/abort.o 00:02:19.795 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.795 CC examples/nvme/reconnect/reconnect.o 00:02:19.795 CC examples/nvme/hello_world/hello_world.o 00:02:19.795 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.795 LINK connect_stress 00:02:19.795 LINK reserve 00:02:19.795 LINK err_injection 00:02:19.795 LINK fused_ordering 00:02:19.795 LINK doorbell_aers 00:02:19.795 LINK boot_partition 00:02:19.795 LINK simple_copy 00:02:19.795 LINK reset 00:02:19.795 LINK nvme_dp 00:02:19.795 LINK mkfs 00:02:19.795 LINK sgl 00:02:19.795 LINK overhead 00:02:19.795 CC examples/accel/perf/accel_perf.o 00:02:19.795 LINK aer 00:02:19.795 LINK nvme_compliance 00:02:19.795 CC examples/blob/hello_world/hello_blob.o 00:02:19.795 CC examples/blob/cli/blobcli.o 00:02:19.795 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:20.055 LINK fdp 00:02:20.055 LINK pmr_persistence 00:02:20.055 LINK hotplug 00:02:20.055 LINK cmb_copy 00:02:20.055 LINK hello_world 00:02:20.055 LINK iscsi_fuzz 00:02:20.055 LINK arbitration 00:02:20.055 LINK reconnect 00:02:20.055 LINK abort 00:02:20.055 LINK hello_blob 00:02:20.055 LINK dif 00:02:20.055 LINK nvme_manage 00:02:20.314 LINK hello_fsdev 00:02:20.314 LINK accel_perf 00:02:20.314 LINK blobcli 00:02:20.572 LINK cuse 00:02:20.572 CC test/bdev/bdevio/bdevio.o 00:02:20.831 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.831 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.090 LINK hello_bdev 00:02:21.090 LINK bdevio 00:02:21.349 LINK bdevperf 00:02:21.918 CC examples/nvmf/nvmf/nvmf.o 00:02:22.177 LINK nvmf 00:02:23.553 LINK esnap 00:02:23.553 00:02:23.553 real 0m55.592s 00:02:23.553 user 7m59.389s 00:02:23.553 sys 3m38.671s 00:02:23.553 10:19:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:23.553 10:19:24 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:23.553 ************************************ 00:02:23.553 END TEST make 00:02:23.553 ************************************ 00:02:23.553 10:19:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:23.553 10:19:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:23.553 10:19:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:23.553 10:19:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.553 10:19:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:23.553 10:19:24 -- pm/common@44 -- $ pid=3208314 00:02:23.553 10:19:24 -- pm/common@50 -- $ kill -TERM 3208314 00:02:23.553 10:19:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.553 10:19:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:23.553 10:19:24 -- pm/common@44 -- $ pid=3208315 00:02:23.553 10:19:24 -- pm/common@50 -- $ kill -TERM 3208315 00:02:23.553 10:19:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.553 10:19:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:23.553 10:19:24 -- pm/common@44 -- $ pid=3208317 00:02:23.553 10:19:24 -- pm/common@50 -- $ kill -TERM 3208317 00:02:23.553 10:19:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.553 10:19:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:23.553 10:19:24 -- pm/common@44 -- $ pid=3208344 00:02:23.553 10:19:24 -- pm/common@50 -- $ sudo -E kill -TERM 3208344 00:02:23.553 10:19:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:23.553 10:19:24 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:23.812 10:19:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:23.812 10:19:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:23.812 10:19:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:23.812 10:19:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:23.812 10:19:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:23.812 10:19:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:23.812 10:19:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:23.812 10:19:24 -- scripts/common.sh@336 -- # IFS=.-: 00:02:23.812 10:19:24 -- scripts/common.sh@336 -- # read -ra ver1 00:02:23.812 10:19:24 -- scripts/common.sh@337 -- # IFS=.-: 00:02:23.812 10:19:24 -- scripts/common.sh@337 -- # read -ra ver2 00:02:23.812 10:19:24 -- scripts/common.sh@338 -- # local 'op=<' 00:02:23.812 10:19:24 -- scripts/common.sh@340 -- # ver1_l=2 00:02:23.812 10:19:24 -- scripts/common.sh@341 -- # ver2_l=1 00:02:23.812 10:19:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:23.812 10:19:24 -- scripts/common.sh@344 -- # case "$op" in 00:02:23.812 10:19:24 -- scripts/common.sh@345 -- # : 1 00:02:23.812 10:19:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:23.812 10:19:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:23.812 10:19:24 -- scripts/common.sh@365 -- # decimal 1 00:02:23.812 10:19:24 -- scripts/common.sh@353 -- # local d=1 00:02:23.812 10:19:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:23.812 10:19:24 -- scripts/common.sh@355 -- # echo 1 00:02:23.812 10:19:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:23.812 10:19:24 -- scripts/common.sh@366 -- # decimal 2 00:02:23.812 10:19:24 -- scripts/common.sh@353 -- # local d=2 00:02:23.813 10:19:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:23.813 10:19:24 -- scripts/common.sh@355 -- # echo 2 00:02:23.813 10:19:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:23.813 10:19:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:23.813 10:19:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:23.813 10:19:24 -- scripts/common.sh@368 -- # return 0 00:02:23.813 10:19:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:23.813 10:19:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.813 --rc genhtml_branch_coverage=1 00:02:23.813 --rc genhtml_function_coverage=1 00:02:23.813 --rc genhtml_legend=1 00:02:23.813 --rc geninfo_all_blocks=1 00:02:23.813 --rc geninfo_unexecuted_blocks=1 00:02:23.813 00:02:23.813 ' 00:02:23.813 10:19:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.813 --rc genhtml_branch_coverage=1 00:02:23.813 --rc genhtml_function_coverage=1 00:02:23.813 --rc genhtml_legend=1 00:02:23.813 --rc geninfo_all_blocks=1 00:02:23.813 --rc geninfo_unexecuted_blocks=1 00:02:23.813 00:02:23.813 ' 00:02:23.813 10:19:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.813 --rc genhtml_branch_coverage=1 00:02:23.813 --rc 
genhtml_function_coverage=1 00:02:23.813 --rc genhtml_legend=1 00:02:23.813 --rc geninfo_all_blocks=1 00:02:23.813 --rc geninfo_unexecuted_blocks=1 00:02:23.813 00:02:23.813 ' 00:02:23.813 10:19:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.813 --rc genhtml_branch_coverage=1 00:02:23.813 --rc genhtml_function_coverage=1 00:02:23.813 --rc genhtml_legend=1 00:02:23.813 --rc geninfo_all_blocks=1 00:02:23.813 --rc geninfo_unexecuted_blocks=1 00:02:23.813 00:02:23.813 ' 00:02:23.813 10:19:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.813 10:19:24 -- nvmf/common.sh@7 -- # uname -s 00:02:23.813 10:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.813 10:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.813 10:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.813 10:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.813 10:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.813 10:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.813 10:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.813 10:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.813 10:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.813 10:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.813 10:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:23.813 10:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:23.813 10:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.813 10:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.813 10:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:23.813 10:19:24 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.813 10:19:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:23.813 10:19:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:23.813 10:19:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.813 10:19:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.813 10:19:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.813 10:19:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.813 10:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.813 10:19:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.813 10:19:24 -- paths/export.sh@5 -- # export PATH 00:02:23.813 10:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.813 10:19:24 -- nvmf/common.sh@51 -- # : 0 00:02:23.813 10:19:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:23.813 10:19:24 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:23.813 10:19:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.813 10:19:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.813 10:19:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.813 10:19:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:23.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:23.813 10:19:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:23.813 10:19:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:23.813 10:19:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:23.813 10:19:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.813 10:19:24 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.813 10:19:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.813 10:19:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.813 10:19:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.813 10:19:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.813 10:19:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.813 10:19:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.813 10:19:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.813 10:19:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.813 10:19:24 -- spdk/autotest.sh@48 -- # udevadm_pid=3271167 00:02:23.813 10:19:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.813 10:19:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.813 10:19:24 -- pm/common@17 -- # local monitor 00:02:23.813 10:19:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.813 10:19:24 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:23.813 10:19:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.813 10:19:24 -- pm/common@21 -- # date +%s 00:02:23.813 10:19:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.813 10:19:24 -- pm/common@21 -- # date +%s 00:02:23.813 10:19:24 -- pm/common@25 -- # sleep 1 00:02:23.813 10:19:24 -- pm/common@21 -- # date +%s 00:02:23.813 10:19:24 -- pm/common@21 -- # date +%s 00:02:23.813 10:19:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094364 00:02:23.813 10:19:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094364 00:02:23.813 10:19:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094364 00:02:23.813 10:19:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094364 00:02:23.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094364_collect-cpu-load.pm.log 00:02:23.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094364_collect-vmstat.pm.log 00:02:23.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094364_collect-cpu-temp.pm.log 00:02:23.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094364_collect-bmc-pm.bmc.pm.log 00:02:24.750 
10:19:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:24.750 10:19:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:24.750 10:19:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:24.750 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:02:24.750 10:19:25 -- spdk/autotest.sh@59 -- # create_test_list 00:02:24.750 10:19:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:24.750 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:02:24.750 10:19:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.009 10:19:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.009 10:19:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.009 10:19:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.009 10:19:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.009 10:19:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:25.009 10:19:25 -- common/autotest_common.sh@1457 -- # uname 00:02:25.009 10:19:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:25.009 10:19:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:25.009 10:19:25 -- common/autotest_common.sh@1477 -- # uname 00:02:25.009 10:19:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:25.009 10:19:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:25.009 10:19:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:25.009 lcov: LCOV version 1.15 00:02:25.009 10:19:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:46.948 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.238 10:19:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:50.238 10:19:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:50.238 10:19:50 -- common/autotest_common.sh@10 -- # set +x 00:02:50.238 10:19:50 -- spdk/autotest.sh@78 -- # rm -f 00:02:50.238 10:19:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.774 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:52.774 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.774 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.774 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.774 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.033 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.033 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.293 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.293 10:19:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:53.293 10:19:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:53.293 10:19:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:53.293 10:19:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:53.293 10:19:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:53.293 10:19:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:53.293 10:19:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:53.293 10:19:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.293 10:19:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:53.293 10:19:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:53.293 10:19:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.293 10:19:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:53.293 10:19:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:53.293 10:19:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:53.293 10:19:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.293 No valid GPT data, bailing 00:02:53.293 10:19:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.293 10:19:53 -- scripts/common.sh@394 -- # pt= 00:02:53.293 10:19:53 -- scripts/common.sh@395 -- # return 1 00:02:53.293 10:19:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.293 1+0 records in 00:02:53.293 1+0 records out 00:02:53.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439837 s, 238 MB/s 00:02:53.293 10:19:53 -- spdk/autotest.sh@105 -- # sync 00:02:53.293 10:19:53 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.293 10:19:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.293 10:19:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.861 10:19:59 -- spdk/autotest.sh@111 -- # uname -s 00:02:59.861 10:19:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:59.861 10:19:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:59.861 10:19:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:01.766 Hugepages 00:03:01.766 node hugesize free / total 00:03:01.766 node0 1048576kB 0 / 0 00:03:01.766 node0 2048kB 0 / 0 00:03:01.766 node1 1048576kB 0 / 0 00:03:01.766 node1 2048kB 0 / 0 00:03:01.766 00:03:01.766 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.766 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:01.766 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:01.766 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:01.766 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:01.766 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:01.766 10:20:02 -- spdk/autotest.sh@117 -- # uname -s 00:03:01.766 10:20:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:01.766 10:20:02 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:01.766 10:20:02 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.057 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.625 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.625 10:20:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:06.567 10:20:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:06.567 10:20:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:06.567 10:20:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:06.567 10:20:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:06.567 10:20:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:06.567 10:20:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:06.567 10:20:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:06.567 10:20:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:06.567 10:20:07 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:06.826 10:20:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:06.826 10:20:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:06.826 10:20:07 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.364 Waiting for block devices as requested 00:03:09.624 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:09.624 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:09.624 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:09.883 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:09.883 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:09.883 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:10.142 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:10.142 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:10.142 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:10.402 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:10.402 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:10.402 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:10.402 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:10.661 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:10.661 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:10.661 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:10.920 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:10.920 10:20:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:10.920 10:20:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:10.920 10:20:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:10.920 10:20:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:10.920 10:20:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:10.920 10:20:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:10.920 10:20:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:10.920 10:20:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:10.920 10:20:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:10.920 10:20:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:10.920 10:20:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:10.920 10:20:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:10.920 10:20:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:10.920 10:20:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:10.920 10:20:11 -- common/autotest_common.sh@1543 -- # continue 00:03:10.920 10:20:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:10.920 10:20:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:10.920 10:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:10.920 10:20:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:10.920 10:20:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.920 10:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:10.920 10:20:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.209 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:14.209 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.209 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.777 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.777 10:20:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:14.777 10:20:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:14.777 10:20:15 -- common/autotest_common.sh@10 -- # set +x 00:03:15.036 10:20:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:15.036 10:20:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:15.036 10:20:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:15.036 10:20:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:15.036 10:20:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:15.036 10:20:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:15.036 10:20:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:15.036 10:20:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:15.036 10:20:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:15.036 10:20:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:15.036 10:20:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:15.036 10:20:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.036 10:20:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:15.036 10:20:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:15.036 10:20:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:15.036 10:20:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:15.036 10:20:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:15.036 10:20:15 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:15.036 10:20:15 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:15.036 10:20:15 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:15.036 10:20:15 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:15.036 10:20:15 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:15.036 10:20:15 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:15.036 10:20:15 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3285579 00:03:15.036 10:20:15 -- common/autotest_common.sh@1585 -- # waitforlisten 3285579 00:03:15.036 10:20:15 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:15.036 10:20:15 -- common/autotest_common.sh@835 -- # '[' -z 3285579 ']' 00:03:15.036 10:20:15 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:15.036 10:20:15 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:15.036 10:20:15 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:15.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:15.036 10:20:15 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:15.036 10:20:15 -- common/autotest_common.sh@10 -- # set +x 00:03:15.036 [2024-11-20 10:20:15.664771] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:03:15.036 [2024-11-20 10:20:15.664820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285579 ] 00:03:15.036 [2024-11-20 10:20:15.744219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.298 [2024-11-20 10:20:15.785921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.015 10:20:16 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:16.015 10:20:16 -- common/autotest_common.sh@868 -- # return 0 00:03:16.015 10:20:16 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:16.015 10:20:16 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:16.015 10:20:16 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:19.304 nvme0n1 00:03:19.304 10:20:19 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:19.304 [2024-11-20 10:20:19.700349] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:19.304 request: 00:03:19.304 { 00:03:19.304 "nvme_ctrlr_name": "nvme0", 00:03:19.304 "password": "test", 00:03:19.304 "method": "bdev_nvme_opal_revert", 00:03:19.304 "req_id": 1 00:03:19.304 } 00:03:19.304 Got JSON-RPC error response 00:03:19.304 response: 00:03:19.304 { 00:03:19.304 "code": -32602, 00:03:19.304 "message": "Invalid parameters" 00:03:19.304 } 00:03:19.304 10:20:19 -- common/autotest_common.sh@1591 -- # true 
00:03:19.304 10:20:19 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:19.304 10:20:19 -- common/autotest_common.sh@1595 -- # killprocess 3285579 00:03:19.304 10:20:19 -- common/autotest_common.sh@954 -- # '[' -z 3285579 ']' 00:03:19.304 10:20:19 -- common/autotest_common.sh@958 -- # kill -0 3285579 00:03:19.304 10:20:19 -- common/autotest_common.sh@959 -- # uname 00:03:19.304 10:20:19 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:19.304 10:20:19 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3285579 00:03:19.304 10:20:19 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:19.304 10:20:19 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:19.304 10:20:19 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3285579' 00:03:19.304 killing process with pid 3285579 00:03:19.304 10:20:19 -- common/autotest_common.sh@973 -- # kill 3285579 00:03:19.304 10:20:19 -- common/autotest_common.sh@978 -- # wait 3285579 00:03:21.210 10:20:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:21.210 10:20:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:21.210 10:20:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:21.210 10:20:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:21.210 10:20:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:21.210 10:20:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.210 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:03:21.210 10:20:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:21.210 10:20:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.210 10:20:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.210 10:20:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.210 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:03:21.210 ************************************ 00:03:21.210 START TEST env 00:03:21.210 
************************************ 00:03:21.210 10:20:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.210 * Looking for test storage... 00:03:21.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:21.210 10:20:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:21.210 10:20:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:21.211 10:20:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.211 10:20:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.211 10:20:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.211 10:20:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.211 10:20:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.211 10:20:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.211 10:20:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.211 10:20:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.211 10:20:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.211 10:20:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.211 10:20:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.211 10:20:21 env -- scripts/common.sh@344 -- # case "$op" in 00:03:21.211 10:20:21 env -- scripts/common.sh@345 -- # : 1 00:03:21.211 10:20:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.211 10:20:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.211 10:20:21 env -- scripts/common.sh@365 -- # decimal 1 00:03:21.211 10:20:21 env -- scripts/common.sh@353 -- # local d=1 00:03:21.211 10:20:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.211 10:20:21 env -- scripts/common.sh@355 -- # echo 1 00:03:21.211 10:20:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.211 10:20:21 env -- scripts/common.sh@366 -- # decimal 2 00:03:21.211 10:20:21 env -- scripts/common.sh@353 -- # local d=2 00:03:21.211 10:20:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.211 10:20:21 env -- scripts/common.sh@355 -- # echo 2 00:03:21.211 10:20:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.211 10:20:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.211 10:20:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.211 10:20:21 env -- scripts/common.sh@368 -- # return 0 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:21.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.211 --rc genhtml_branch_coverage=1 00:03:21.211 --rc genhtml_function_coverage=1 00:03:21.211 --rc genhtml_legend=1 00:03:21.211 --rc geninfo_all_blocks=1 00:03:21.211 --rc geninfo_unexecuted_blocks=1 00:03:21.211 00:03:21.211 ' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:21.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.211 --rc genhtml_branch_coverage=1 00:03:21.211 --rc genhtml_function_coverage=1 00:03:21.211 --rc genhtml_legend=1 00:03:21.211 --rc geninfo_all_blocks=1 00:03:21.211 --rc geninfo_unexecuted_blocks=1 00:03:21.211 00:03:21.211 ' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:21.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:21.211 --rc genhtml_branch_coverage=1 00:03:21.211 --rc genhtml_function_coverage=1 00:03:21.211 --rc genhtml_legend=1 00:03:21.211 --rc geninfo_all_blocks=1 00:03:21.211 --rc geninfo_unexecuted_blocks=1 00:03:21.211 00:03:21.211 ' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:21.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.211 --rc genhtml_branch_coverage=1 00:03:21.211 --rc genhtml_function_coverage=1 00:03:21.211 --rc genhtml_legend=1 00:03:21.211 --rc geninfo_all_blocks=1 00:03:21.211 --rc geninfo_unexecuted_blocks=1 00:03:21.211 00:03:21.211 ' 00:03:21.211 10:20:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.211 10:20:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.211 ************************************ 00:03:21.211 START TEST env_memory 00:03:21.211 ************************************ 00:03:21.211 10:20:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.211 00:03:21.211 00:03:21.211 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.211 http://cunit.sourceforge.net/ 00:03:21.211 00:03:21.211 00:03:21.211 Suite: memory 00:03:21.211 Test: alloc and free memory map ...[2024-11-20 10:20:21.721786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:21.211 passed 00:03:21.211 Test: mem map translation ...[2024-11-20 10:20:21.740628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:21.211 [2024-11-20 
10:20:21.740641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:21.211 [2024-11-20 10:20:21.740690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:21.211 [2024-11-20 10:20:21.740696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:21.211 passed 00:03:21.211 Test: mem map registration ...[2024-11-20 10:20:21.777394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:21.211 [2024-11-20 10:20:21.777407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:21.211 passed 00:03:21.211 Test: mem map adjacent registrations ...passed 00:03:21.211 00:03:21.211 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.211 suites 1 1 n/a 0 0 00:03:21.211 tests 4 4 4 0 0 00:03:21.211 asserts 152 152 152 0 n/a 00:03:21.211 00:03:21.211 Elapsed time = 0.137 seconds 00:03:21.211 00:03:21.211 real 0m0.150s 00:03:21.211 user 0m0.138s 00:03:21.211 sys 0m0.011s 00:03:21.211 10:20:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.211 10:20:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:21.211 ************************************ 00:03:21.211 END TEST env_memory 00:03:21.211 ************************************ 00:03:21.211 10:20:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:21.211 10:20:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.211 10:20:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.211 ************************************ 00:03:21.211 START TEST env_vtophys 00:03:21.211 ************************************ 00:03:21.211 10:20:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.211 EAL: lib.eal log level changed from notice to debug 00:03:21.211 EAL: Detected lcore 0 as core 0 on socket 0 00:03:21.211 EAL: Detected lcore 1 as core 1 on socket 0 00:03:21.211 EAL: Detected lcore 2 as core 2 on socket 0 00:03:21.211 EAL: Detected lcore 3 as core 3 on socket 0 00:03:21.211 EAL: Detected lcore 4 as core 4 on socket 0 00:03:21.211 EAL: Detected lcore 5 as core 5 on socket 0 00:03:21.211 EAL: Detected lcore 6 as core 6 on socket 0 00:03:21.211 EAL: Detected lcore 7 as core 8 on socket 0 00:03:21.211 EAL: Detected lcore 8 as core 9 on socket 0 00:03:21.211 EAL: Detected lcore 9 as core 10 on socket 0 00:03:21.211 EAL: Detected lcore 10 as core 11 on socket 0 00:03:21.211 EAL: Detected lcore 11 as core 12 on socket 0 00:03:21.211 EAL: Detected lcore 12 as core 13 on socket 0 00:03:21.211 EAL: Detected lcore 13 as core 16 on socket 0 00:03:21.211 EAL: Detected lcore 14 as core 17 on socket 0 00:03:21.211 EAL: Detected lcore 15 as core 18 on socket 0 00:03:21.211 EAL: Detected lcore 16 as core 19 on socket 0 00:03:21.211 EAL: Detected lcore 17 as core 20 on socket 0 00:03:21.211 EAL: Detected lcore 18 as core 21 on socket 0 00:03:21.211 EAL: Detected lcore 19 as core 25 on socket 0 00:03:21.211 EAL: Detected lcore 20 as core 26 on socket 0 00:03:21.211 EAL: Detected lcore 21 as core 27 on socket 0 00:03:21.211 EAL: Detected lcore 22 as core 28 on socket 0 00:03:21.211 EAL: Detected lcore 23 as core 29 on socket 0 00:03:21.211 EAL: Detected lcore 24 as core 0 on socket 1 00:03:21.211 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:21.211 EAL: Detected lcore 26 as core 2 on socket 1 00:03:21.211 EAL: Detected lcore 27 as core 3 on socket 1 00:03:21.211 EAL: Detected lcore 28 as core 4 on socket 1 00:03:21.211 EAL: Detected lcore 29 as core 5 on socket 1 00:03:21.211 EAL: Detected lcore 30 as core 6 on socket 1 00:03:21.211 EAL: Detected lcore 31 as core 9 on socket 1 00:03:21.211 EAL: Detected lcore 32 as core 10 on socket 1 00:03:21.211 EAL: Detected lcore 33 as core 11 on socket 1 00:03:21.211 EAL: Detected lcore 34 as core 12 on socket 1 00:03:21.211 EAL: Detected lcore 35 as core 13 on socket 1 00:03:21.211 EAL: Detected lcore 36 as core 16 on socket 1 00:03:21.211 EAL: Detected lcore 37 as core 17 on socket 1 00:03:21.211 EAL: Detected lcore 38 as core 18 on socket 1 00:03:21.211 EAL: Detected lcore 39 as core 19 on socket 1 00:03:21.211 EAL: Detected lcore 40 as core 20 on socket 1 00:03:21.211 EAL: Detected lcore 41 as core 21 on socket 1 00:03:21.211 EAL: Detected lcore 42 as core 24 on socket 1 00:03:21.211 EAL: Detected lcore 43 as core 25 on socket 1 00:03:21.211 EAL: Detected lcore 44 as core 26 on socket 1 00:03:21.211 EAL: Detected lcore 45 as core 27 on socket 1 00:03:21.211 EAL: Detected lcore 46 as core 28 on socket 1 00:03:21.211 EAL: Detected lcore 47 as core 29 on socket 1 00:03:21.211 EAL: Detected lcore 48 as core 0 on socket 0 00:03:21.211 EAL: Detected lcore 49 as core 1 on socket 0 00:03:21.211 EAL: Detected lcore 50 as core 2 on socket 0 00:03:21.211 EAL: Detected lcore 51 as core 3 on socket 0 00:03:21.211 EAL: Detected lcore 52 as core 4 on socket 0 00:03:21.211 EAL: Detected lcore 53 as core 5 on socket 0 00:03:21.212 EAL: Detected lcore 54 as core 6 on socket 0 00:03:21.212 EAL: Detected lcore 55 as core 8 on socket 0 00:03:21.212 EAL: Detected lcore 56 as core 9 on socket 0 00:03:21.212 EAL: Detected lcore 57 as core 10 on socket 0 00:03:21.212 EAL: Detected lcore 58 as core 11 on socket 0 00:03:21.212 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:21.212 EAL: Detected lcore 60 as core 13 on socket 0 00:03:21.212 EAL: Detected lcore 61 as core 16 on socket 0 00:03:21.212 EAL: Detected lcore 62 as core 17 on socket 0 00:03:21.212 EAL: Detected lcore 63 as core 18 on socket 0 00:03:21.212 EAL: Detected lcore 64 as core 19 on socket 0 00:03:21.212 EAL: Detected lcore 65 as core 20 on socket 0 00:03:21.212 EAL: Detected lcore 66 as core 21 on socket 0 00:03:21.212 EAL: Detected lcore 67 as core 25 on socket 0 00:03:21.212 EAL: Detected lcore 68 as core 26 on socket 0 00:03:21.212 EAL: Detected lcore 69 as core 27 on socket 0 00:03:21.212 EAL: Detected lcore 70 as core 28 on socket 0 00:03:21.212 EAL: Detected lcore 71 as core 29 on socket 0 00:03:21.212 EAL: Detected lcore 72 as core 0 on socket 1 00:03:21.212 EAL: Detected lcore 73 as core 1 on socket 1 00:03:21.212 EAL: Detected lcore 74 as core 2 on socket 1 00:03:21.212 EAL: Detected lcore 75 as core 3 on socket 1 00:03:21.212 EAL: Detected lcore 76 as core 4 on socket 1 00:03:21.212 EAL: Detected lcore 77 as core 5 on socket 1 00:03:21.212 EAL: Detected lcore 78 as core 6 on socket 1 00:03:21.212 EAL: Detected lcore 79 as core 9 on socket 1 00:03:21.212 EAL: Detected lcore 80 as core 10 on socket 1 00:03:21.212 EAL: Detected lcore 81 as core 11 on socket 1 00:03:21.212 EAL: Detected lcore 82 as core 12 on socket 1 00:03:21.212 EAL: Detected lcore 83 as core 13 on socket 1 00:03:21.212 EAL: Detected lcore 84 as core 16 on socket 1 00:03:21.212 EAL: Detected lcore 85 as core 17 on socket 1 00:03:21.212 EAL: Detected lcore 86 as core 18 on socket 1 00:03:21.212 EAL: Detected lcore 87 as core 19 on socket 1 00:03:21.212 EAL: Detected lcore 88 as core 20 on socket 1 00:03:21.212 EAL: Detected lcore 89 as core 21 on socket 1 00:03:21.212 EAL: Detected lcore 90 as core 24 on socket 1 00:03:21.212 EAL: Detected lcore 91 as core 25 on socket 1 00:03:21.212 EAL: Detected lcore 92 as core 26 on socket 1 00:03:21.212 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:21.212 EAL: Detected lcore 94 as core 28 on socket 1 00:03:21.212 EAL: Detected lcore 95 as core 29 on socket 1 00:03:21.212 EAL: Maximum logical cores by configuration: 128 00:03:21.212 EAL: Detected CPU lcores: 96 00:03:21.212 EAL: Detected NUMA nodes: 2 00:03:21.212 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:21.212 EAL: Detected shared linkage of DPDK 00:03:21.212 EAL: No shared files mode enabled, IPC will be disabled 00:03:21.472 EAL: Bus pci wants IOVA as 'DC' 00:03:21.472 EAL: Buses did not request a specific IOVA mode. 00:03:21.472 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:21.472 EAL: Selected IOVA mode 'VA' 00:03:21.472 EAL: Probing VFIO support... 00:03:21.472 EAL: IOMMU type 1 (Type 1) is supported 00:03:21.472 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:21.472 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:21.472 EAL: VFIO support initialized 00:03:21.472 EAL: Ask a virtual area of 0x2e000 bytes 00:03:21.472 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:21.472 EAL: Setting up physically contiguous memory... 
00:03:21.472 EAL: Setting maximum number of open files to 524288 00:03:21.472 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:21.472 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:21.472 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:21.472 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:21.472 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.472 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:21.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.472 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.472 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:21.472 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:21.472 EAL: Hugepages will be freed exactly as allocated. 
00:03:21.472 EAL: No shared files mode enabled, IPC is disabled 00:03:21.472 EAL: No shared files mode enabled, IPC is disabled 00:03:21.472 EAL: TSC frequency is ~2300000 KHz 00:03:21.472 EAL: Main lcore 0 is ready (tid=7f567981ca00;cpuset=[0]) 00:03:21.472 EAL: Trying to obtain current memory policy. 00:03:21.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.472 EAL: Restoring previous memory policy: 0 00:03:21.472 EAL: request: mp_malloc_sync 00:03:21.472 EAL: No shared files mode enabled, IPC is disabled 00:03:21.472 EAL: Heap on socket 0 was expanded by 2MB 00:03:21.472 EAL: No shared files mode enabled, IPC is disabled 00:03:21.472 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:21.472 EAL: Mem event callback 'spdk:(nil)' registered 00:03:21.472 00:03:21.472 00:03:21.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.472 http://cunit.sourceforge.net/ 00:03:21.472 00:03:21.472 00:03:21.472 Suite: components_suite 00:03:21.472 Test: vtophys_malloc_test ...passed 00:03:21.472 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:21.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.472 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 4MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 4MB 00:03:21.473 EAL: Trying to obtain current memory policy. 
00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 6MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 6MB 00:03:21.473 EAL: Trying to obtain current memory policy. 00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 10MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 10MB 00:03:21.473 EAL: Trying to obtain current memory policy. 00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 18MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 18MB 00:03:21.473 EAL: Trying to obtain current memory policy. 
00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 34MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 34MB 00:03:21.473 EAL: Trying to obtain current memory policy. 00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 66MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 66MB 00:03:21.473 EAL: Trying to obtain current memory policy. 00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 130MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was shrunk by 130MB 00:03:21.473 EAL: Trying to obtain current memory policy. 
00:03:21.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.473 EAL: Restoring previous memory policy: 4 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.473 EAL: request: mp_malloc_sync 00:03:21.473 EAL: No shared files mode enabled, IPC is disabled 00:03:21.473 EAL: Heap on socket 0 was expanded by 258MB 00:03:21.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.732 EAL: request: mp_malloc_sync 00:03:21.732 EAL: No shared files mode enabled, IPC is disabled 00:03:21.732 EAL: Heap on socket 0 was shrunk by 258MB 00:03:21.732 EAL: Trying to obtain current memory policy. 00:03:21.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.732 EAL: Restoring previous memory policy: 4 00:03:21.732 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.732 EAL: request: mp_malloc_sync 00:03:21.732 EAL: No shared files mode enabled, IPC is disabled 00:03:21.732 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.732 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.991 EAL: request: mp_malloc_sync 00:03:21.991 EAL: No shared files mode enabled, IPC is disabled 00:03:21.991 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.991 EAL: Trying to obtain current memory policy. 
00:03:21.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.991 EAL: Restoring previous memory policy: 4 00:03:21.991 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.991 EAL: request: mp_malloc_sync 00:03:21.991 EAL: No shared files mode enabled, IPC is disabled 00:03:21.991 EAL: Heap on socket 0 was expanded by 1026MB 00:03:22.251 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.511 EAL: request: mp_malloc_sync 00:03:22.511 EAL: No shared files mode enabled, IPC is disabled 00:03:22.511 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:22.511 passed 00:03:22.511 00:03:22.511 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.511 suites 1 1 n/a 0 0 00:03:22.511 tests 2 2 2 0 0 00:03:22.511 asserts 497 497 497 0 n/a 00:03:22.511 00:03:22.511 Elapsed time = 0.983 seconds 00:03:22.511 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.511 EAL: request: mp_malloc_sync 00:03:22.511 EAL: No shared files mode enabled, IPC is disabled 00:03:22.511 EAL: Heap on socket 0 was shrunk by 2MB 00:03:22.511 EAL: No shared files mode enabled, IPC is disabled 00:03:22.511 EAL: No shared files mode enabled, IPC is disabled 00:03:22.511 EAL: No shared files mode enabled, IPC is disabled 00:03:22.511 00:03:22.511 real 0m1.121s 00:03:22.511 user 0m0.659s 00:03:22.511 sys 0m0.428s 00:03:22.511 10:20:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.511 10:20:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:22.511 ************************************ 00:03:22.511 END TEST env_vtophys 00:03:22.511 ************************************ 00:03:22.511 10:20:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.511 10:20:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.511 10:20:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.511 10:20:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.511 
************************************ 00:03:22.511 START TEST env_pci 00:03:22.511 ************************************ 00:03:22.511 10:20:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.511 00:03:22.511 00:03:22.511 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.511 http://cunit.sourceforge.net/ 00:03:22.511 00:03:22.511 00:03:22.511 Suite: pci 00:03:22.511 Test: pci_hook ...[2024-11-20 10:20:23.098872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3286902 has claimed it 00:03:22.511 EAL: Cannot find device (10000:00:01.0) 00:03:22.511 EAL: Failed to attach device on primary process 00:03:22.511 passed 00:03:22.511 00:03:22.511 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.511 suites 1 1 n/a 0 0 00:03:22.511 tests 1 1 1 0 0 00:03:22.511 asserts 25 25 25 0 n/a 00:03:22.511 00:03:22.511 Elapsed time = 0.028 seconds 00:03:22.511 00:03:22.511 real 0m0.045s 00:03:22.511 user 0m0.015s 00:03:22.511 sys 0m0.029s 00:03:22.511 10:20:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.511 10:20:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.511 ************************************ 00:03:22.511 END TEST env_pci 00:03:22.511 ************************************ 00:03:22.511 10:20:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.511 10:20:23 env -- env/env.sh@15 -- # uname 00:03:22.511 10:20:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.511 10:20:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.511 10:20:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.511 10:20:23 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:22.511 10:20:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.511 10:20:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.511 ************************************ 00:03:22.511 START TEST env_dpdk_post_init 00:03:22.511 ************************************ 00:03:22.511 10:20:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.511 EAL: Detected CPU lcores: 96 00:03:22.511 EAL: Detected NUMA nodes: 2 00:03:22.511 EAL: Detected shared linkage of DPDK 00:03:22.511 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.770 EAL: Selected IOVA mode 'VA' 00:03:22.770 EAL: VFIO support initialized 00:03:22.770 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.770 EAL: Using IOMMU type 1 (Type 1) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:22.770 EAL: Ignore mapping IO port bar(1) 00:03:22.770 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:23.707 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:23.707 EAL: Ignore mapping IO port bar(1) 00:03:23.707 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:26.995 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:26.995 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:26.995 Starting DPDK initialization... 00:03:26.995 Starting SPDK post initialization... 00:03:26.995 SPDK NVMe probe 00:03:26.995 Attaching to 0000:5e:00.0 00:03:26.995 Attached to 0000:5e:00.0 00:03:26.995 Cleaning up... 
00:03:26.995 00:03:26.995 real 0m4.338s 00:03:26.995 user 0m2.967s 00:03:26.995 sys 0m0.443s 00:03:26.995 10:20:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.995 10:20:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.995 ************************************ 00:03:26.995 END TEST env_dpdk_post_init 00:03:26.995 ************************************ 00:03:26.995 10:20:27 env -- env/env.sh@26 -- # uname 00:03:26.995 10:20:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.995 10:20:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.995 10:20:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.995 10:20:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.995 10:20:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.995 ************************************ 00:03:26.995 START TEST env_mem_callbacks 00:03:26.995 ************************************ 00:03:26.995 10:20:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.995 EAL: Detected CPU lcores: 96 00:03:26.995 EAL: Detected NUMA nodes: 2 00:03:26.995 EAL: Detected shared linkage of DPDK 00:03:26.995 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.995 EAL: Selected IOVA mode 'VA' 00:03:26.995 EAL: VFIO support initialized 00:03:26.995 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.995 00:03:26.995 00:03:26.995 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.995 http://cunit.sourceforge.net/ 00:03:26.995 00:03:26.995 00:03:26.995 Suite: memory 00:03:26.995 Test: test ... 
00:03:26.995 register 0x200000200000 2097152 00:03:26.995 malloc 3145728 00:03:26.995 register 0x200000400000 4194304 00:03:26.995 buf 0x200000500000 len 3145728 PASSED 00:03:26.995 malloc 64 00:03:26.995 buf 0x2000004fff40 len 64 PASSED 00:03:26.995 malloc 4194304 00:03:26.995 register 0x200000800000 6291456 00:03:26.995 buf 0x200000a00000 len 4194304 PASSED 00:03:26.995 free 0x200000500000 3145728 00:03:26.995 free 0x2000004fff40 64 00:03:26.995 unregister 0x200000400000 4194304 PASSED 00:03:26.995 free 0x200000a00000 4194304 00:03:26.995 unregister 0x200000800000 6291456 PASSED 00:03:26.995 malloc 8388608 00:03:26.995 register 0x200000400000 10485760 00:03:26.995 buf 0x200000600000 len 8388608 PASSED 00:03:26.995 free 0x200000600000 8388608 00:03:26.995 unregister 0x200000400000 10485760 PASSED 00:03:26.995 passed 00:03:26.995 00:03:26.995 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.995 suites 1 1 n/a 0 0 00:03:26.995 tests 1 1 1 0 0 00:03:26.995 asserts 15 15 15 0 n/a 00:03:26.995 00:03:26.995 Elapsed time = 0.008 seconds 00:03:26.995 00:03:26.995 real 0m0.059s 00:03:26.995 user 0m0.019s 00:03:26.995 sys 0m0.040s 00:03:26.995 10:20:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.995 10:20:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:26.995 ************************************ 00:03:26.995 END TEST env_mem_callbacks 00:03:26.995 ************************************ 00:03:26.995 00:03:26.995 real 0m6.245s 00:03:26.995 user 0m4.038s 00:03:26.995 sys 0m1.278s 00:03:26.995 10:20:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.995 10:20:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.995 ************************************ 00:03:26.995 END TEST env 00:03:26.995 ************************************ 00:03:27.255 10:20:27 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:27.255 10:20:27 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.255 10:20:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.255 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:03:27.255 ************************************ 00:03:27.255 START TEST rpc 00:03:27.255 ************************************ 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:27.255 * Looking for test storage... 00:03:27.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.255 10:20:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.255 10:20:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.255 10:20:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.255 10:20:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.255 10:20:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.255 10:20:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:27.255 10:20:27 rpc -- scripts/common.sh@345 -- # : 1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.255 10:20:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.255 10:20:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@353 -- # local d=1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.255 10:20:27 rpc -- scripts/common.sh@355 -- # echo 1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.255 10:20:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@353 -- # local d=2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.255 10:20:27 rpc -- scripts/common.sh@355 -- # echo 2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.255 10:20:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.255 10:20:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.255 10:20:27 rpc -- scripts/common.sh@368 -- # return 0 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:27.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.255 --rc genhtml_branch_coverage=1 00:03:27.255 --rc genhtml_function_coverage=1 00:03:27.255 --rc genhtml_legend=1 00:03:27.255 --rc geninfo_all_blocks=1 00:03:27.255 --rc geninfo_unexecuted_blocks=1 00:03:27.255 00:03:27.255 ' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:27.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.255 --rc genhtml_branch_coverage=1 00:03:27.255 --rc genhtml_function_coverage=1 00:03:27.255 --rc genhtml_legend=1 00:03:27.255 --rc geninfo_all_blocks=1 00:03:27.255 --rc geninfo_unexecuted_blocks=1 00:03:27.255 00:03:27.255 ' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:27.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:27.255 --rc genhtml_branch_coverage=1 00:03:27.255 --rc genhtml_function_coverage=1 00:03:27.255 --rc genhtml_legend=1 00:03:27.255 --rc geninfo_all_blocks=1 00:03:27.255 --rc geninfo_unexecuted_blocks=1 00:03:27.255 00:03:27.255 ' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:27.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.255 --rc genhtml_branch_coverage=1 00:03:27.255 --rc genhtml_function_coverage=1 00:03:27.255 --rc genhtml_legend=1 00:03:27.255 --rc geninfo_all_blocks=1 00:03:27.255 --rc geninfo_unexecuted_blocks=1 00:03:27.255 00:03:27.255 ' 00:03:27.255 10:20:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3287872 00:03:27.255 10:20:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:27.255 10:20:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:27.255 10:20:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3287872 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 3287872 ']' 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:27.255 10:20:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.514 [2024-11-20 10:20:28.027556] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:03:27.514 [2024-11-20 10:20:28.027606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287872 ] 00:03:27.514 [2024-11-20 10:20:28.101031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.514 [2024-11-20 10:20:28.143058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:27.514 [2024-11-20 10:20:28.143094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3287872' to capture a snapshot of events at runtime. 00:03:27.514 [2024-11-20 10:20:28.143104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:27.515 [2024-11-20 10:20:28.143110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:27.515 [2024-11-20 10:20:28.143115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3287872 for offline analysis/debug. 
00:03:27.515 [2024-11-20 10:20:28.143696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.774 10:20:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:27.774 10:20:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:27.774 10:20:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.774 10:20:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.774 10:20:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:27.774 10:20:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:27.774 10:20:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.774 10:20:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.774 10:20:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.774 ************************************ 00:03:27.774 START TEST rpc_integrity 00:03:27.774 ************************************ 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.774 10:20:28 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.774 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.774 { 00:03:27.774 "name": "Malloc0", 00:03:27.774 "aliases": [ 00:03:27.774 "de724561-5b1c-41f9-b4da-cbd05deaf95b" 00:03:27.774 ], 00:03:27.774 "product_name": "Malloc disk", 00:03:27.774 "block_size": 512, 00:03:27.774 "num_blocks": 16384, 00:03:27.774 "uuid": "de724561-5b1c-41f9-b4da-cbd05deaf95b", 00:03:27.774 "assigned_rate_limits": { 00:03:27.774 "rw_ios_per_sec": 0, 00:03:27.774 "rw_mbytes_per_sec": 0, 00:03:27.774 "r_mbytes_per_sec": 0, 00:03:27.774 "w_mbytes_per_sec": 0 00:03:27.774 }, 00:03:27.774 "claimed": false, 00:03:27.774 "zoned": false, 00:03:27.774 "supported_io_types": { 00:03:27.774 "read": true, 00:03:27.774 "write": true, 00:03:27.774 "unmap": true, 00:03:27.774 "flush": true, 00:03:27.774 "reset": true, 00:03:27.774 "nvme_admin": false, 00:03:27.774 "nvme_io": false, 00:03:27.774 "nvme_io_md": false, 00:03:27.774 "write_zeroes": true, 00:03:27.774 "zcopy": true, 00:03:27.774 "get_zone_info": false, 00:03:27.774 
"zone_management": false, 00:03:27.774 "zone_append": false, 00:03:27.774 "compare": false, 00:03:27.774 "compare_and_write": false, 00:03:27.774 "abort": true, 00:03:27.774 "seek_hole": false, 00:03:27.774 "seek_data": false, 00:03:27.774 "copy": true, 00:03:27.774 "nvme_iov_md": false 00:03:27.774 }, 00:03:27.774 "memory_domains": [ 00:03:27.774 { 00:03:27.774 "dma_device_id": "system", 00:03:27.774 "dma_device_type": 1 00:03:27.774 }, 00:03:27.774 { 00:03:27.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.774 "dma_device_type": 2 00:03:27.774 } 00:03:27.774 ], 00:03:27.774 "driver_specific": {} 00:03:27.774 } 00:03:27.774 ]' 00:03:27.774 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.033 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.033 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.033 [2024-11-20 10:20:28.514312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:28.033 [2024-11-20 10:20:28.514341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.033 [2024-11-20 10:20:28.514354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5d56e0 00:03:28.033 [2024-11-20 10:20:28.514361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.033 [2024-11-20 10:20:28.515476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.033 [2024-11-20 10:20:28.515497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.033 Passthru0 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.033 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.033 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.033 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.033 { 00:03:28.033 "name": "Malloc0", 00:03:28.033 "aliases": [ 00:03:28.033 "de724561-5b1c-41f9-b4da-cbd05deaf95b" 00:03:28.033 ], 00:03:28.033 "product_name": "Malloc disk", 00:03:28.033 "block_size": 512, 00:03:28.033 "num_blocks": 16384, 00:03:28.034 "uuid": "de724561-5b1c-41f9-b4da-cbd05deaf95b", 00:03:28.034 "assigned_rate_limits": { 00:03:28.034 "rw_ios_per_sec": 0, 00:03:28.034 "rw_mbytes_per_sec": 0, 00:03:28.034 "r_mbytes_per_sec": 0, 00:03:28.034 "w_mbytes_per_sec": 0 00:03:28.034 }, 00:03:28.034 "claimed": true, 00:03:28.034 "claim_type": "exclusive_write", 00:03:28.034 "zoned": false, 00:03:28.034 "supported_io_types": { 00:03:28.034 "read": true, 00:03:28.034 "write": true, 00:03:28.034 "unmap": true, 00:03:28.034 "flush": true, 00:03:28.034 "reset": true, 00:03:28.034 "nvme_admin": false, 00:03:28.034 "nvme_io": false, 00:03:28.034 "nvme_io_md": false, 00:03:28.034 "write_zeroes": true, 00:03:28.034 "zcopy": true, 00:03:28.034 "get_zone_info": false, 00:03:28.034 "zone_management": false, 00:03:28.034 "zone_append": false, 00:03:28.034 "compare": false, 00:03:28.034 "compare_and_write": false, 00:03:28.034 "abort": true, 00:03:28.034 "seek_hole": false, 00:03:28.034 "seek_data": false, 00:03:28.034 "copy": true, 00:03:28.034 "nvme_iov_md": false 00:03:28.034 }, 00:03:28.034 "memory_domains": [ 00:03:28.034 { 00:03:28.034 "dma_device_id": "system", 00:03:28.034 "dma_device_type": 1 00:03:28.034 }, 00:03:28.034 { 00:03:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.034 "dma_device_type": 2 00:03:28.034 } 00:03:28.034 ], 00:03:28.034 "driver_specific": {} 00:03:28.034 }, 00:03:28.034 { 
00:03:28.034 "name": "Passthru0", 00:03:28.034 "aliases": [ 00:03:28.034 "696e1abd-9f3b-5407-a6c3-d678246dea93" 00:03:28.034 ], 00:03:28.034 "product_name": "passthru", 00:03:28.034 "block_size": 512, 00:03:28.034 "num_blocks": 16384, 00:03:28.034 "uuid": "696e1abd-9f3b-5407-a6c3-d678246dea93", 00:03:28.034 "assigned_rate_limits": { 00:03:28.034 "rw_ios_per_sec": 0, 00:03:28.034 "rw_mbytes_per_sec": 0, 00:03:28.034 "r_mbytes_per_sec": 0, 00:03:28.034 "w_mbytes_per_sec": 0 00:03:28.034 }, 00:03:28.034 "claimed": false, 00:03:28.034 "zoned": false, 00:03:28.034 "supported_io_types": { 00:03:28.034 "read": true, 00:03:28.034 "write": true, 00:03:28.034 "unmap": true, 00:03:28.034 "flush": true, 00:03:28.034 "reset": true, 00:03:28.034 "nvme_admin": false, 00:03:28.034 "nvme_io": false, 00:03:28.034 "nvme_io_md": false, 00:03:28.034 "write_zeroes": true, 00:03:28.034 "zcopy": true, 00:03:28.034 "get_zone_info": false, 00:03:28.034 "zone_management": false, 00:03:28.034 "zone_append": false, 00:03:28.034 "compare": false, 00:03:28.034 "compare_and_write": false, 00:03:28.034 "abort": true, 00:03:28.034 "seek_hole": false, 00:03:28.034 "seek_data": false, 00:03:28.034 "copy": true, 00:03:28.034 "nvme_iov_md": false 00:03:28.034 }, 00:03:28.034 "memory_domains": [ 00:03:28.034 { 00:03:28.034 "dma_device_id": "system", 00:03:28.034 "dma_device_type": 1 00:03:28.034 }, 00:03:28.034 { 00:03:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.034 "dma_device_type": 2 00:03:28.034 } 00:03:28.034 ], 00:03:28.034 "driver_specific": { 00:03:28.034 "passthru": { 00:03:28.034 "name": "Passthru0", 00:03:28.034 "base_bdev_name": "Malloc0" 00:03:28.034 } 00:03:28.034 } 00:03:28.034 } 00:03:28.034 ]' 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.034 10:20:28 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.034 10:20:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.034 00:03:28.034 real 0m0.267s 00:03:28.034 user 0m0.167s 00:03:28.034 sys 0m0.036s 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 ************************************ 00:03:28.034 END TEST rpc_integrity 00:03:28.034 ************************************ 00:03:28.034 10:20:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:28.034 10:20:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.034 10:20:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.034 10:20:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 ************************************ 00:03:28.034 START TEST rpc_plugins 
00:03:28.034 ************************************ 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:28.034 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.034 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:28.034 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.034 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.034 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:28.034 { 00:03:28.034 "name": "Malloc1", 00:03:28.034 "aliases": [ 00:03:28.034 "c1010cdb-fd76-46d4-8209-64090ca1fbe2" 00:03:28.034 ], 00:03:28.034 "product_name": "Malloc disk", 00:03:28.034 "block_size": 4096, 00:03:28.034 "num_blocks": 256, 00:03:28.034 "uuid": "c1010cdb-fd76-46d4-8209-64090ca1fbe2", 00:03:28.034 "assigned_rate_limits": { 00:03:28.034 "rw_ios_per_sec": 0, 00:03:28.034 "rw_mbytes_per_sec": 0, 00:03:28.034 "r_mbytes_per_sec": 0, 00:03:28.034 "w_mbytes_per_sec": 0 00:03:28.034 }, 00:03:28.034 "claimed": false, 00:03:28.034 "zoned": false, 00:03:28.034 "supported_io_types": { 00:03:28.034 "read": true, 00:03:28.034 "write": true, 00:03:28.034 "unmap": true, 00:03:28.034 "flush": true, 00:03:28.034 "reset": true, 00:03:28.034 "nvme_admin": false, 00:03:28.034 "nvme_io": false, 00:03:28.034 "nvme_io_md": false, 00:03:28.034 "write_zeroes": true, 00:03:28.034 "zcopy": true, 00:03:28.034 "get_zone_info": false, 00:03:28.034 "zone_management": false, 00:03:28.034 
"zone_append": false, 00:03:28.034 "compare": false, 00:03:28.034 "compare_and_write": false, 00:03:28.034 "abort": true, 00:03:28.034 "seek_hole": false, 00:03:28.034 "seek_data": false, 00:03:28.034 "copy": true, 00:03:28.034 "nvme_iov_md": false 00:03:28.034 }, 00:03:28.034 "memory_domains": [ 00:03:28.034 { 00:03:28.034 "dma_device_id": "system", 00:03:28.034 "dma_device_type": 1 00:03:28.034 }, 00:03:28.034 { 00:03:28.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.034 "dma_device_type": 2 00:03:28.034 } 00:03:28.034 ], 00:03:28.034 "driver_specific": {} 00:03:28.034 } 00:03:28.034 ]' 00:03:28.034 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:28.294 10:20:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:28.294 00:03:28.294 real 0m0.143s 00:03:28.294 user 0m0.088s 00:03:28.294 sys 0m0.019s 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.294 10:20:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.294 ************************************ 
00:03:28.294 END TEST rpc_plugins 00:03:28.294 ************************************ 00:03:28.294 10:20:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:28.294 10:20:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.294 10:20:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.294 10:20:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.294 ************************************ 00:03:28.294 START TEST rpc_trace_cmd_test 00:03:28.294 ************************************ 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:28.294 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3287872", 00:03:28.294 "tpoint_group_mask": "0x8", 00:03:28.294 "iscsi_conn": { 00:03:28.294 "mask": "0x2", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "scsi": { 00:03:28.294 "mask": "0x4", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "bdev": { 00:03:28.294 "mask": "0x8", 00:03:28.294 "tpoint_mask": "0xffffffffffffffff" 00:03:28.294 }, 00:03:28.294 "nvmf_rdma": { 00:03:28.294 "mask": "0x10", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "nvmf_tcp": { 00:03:28.294 "mask": "0x20", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "ftl": { 00:03:28.294 "mask": "0x40", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "blobfs": { 00:03:28.294 "mask": "0x80", 00:03:28.294 
"tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "dsa": { 00:03:28.294 "mask": "0x200", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "thread": { 00:03:28.294 "mask": "0x400", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "nvme_pcie": { 00:03:28.294 "mask": "0x800", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "iaa": { 00:03:28.294 "mask": "0x1000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "nvme_tcp": { 00:03:28.294 "mask": "0x2000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "bdev_nvme": { 00:03:28.294 "mask": "0x4000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "sock": { 00:03:28.294 "mask": "0x8000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "blob": { 00:03:28.294 "mask": "0x10000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "bdev_raid": { 00:03:28.294 "mask": "0x20000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 }, 00:03:28.294 "scheduler": { 00:03:28.294 "mask": "0x40000", 00:03:28.294 "tpoint_mask": "0x0" 00:03:28.294 } 00:03:28.294 }' 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:28.294 10:20:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:28.553 00:03:28.553 real 0m0.215s 00:03:28.553 user 0m0.181s 00:03:28.553 sys 0m0.027s 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.553 10:20:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.553 ************************************ 00:03:28.553 END TEST rpc_trace_cmd_test 00:03:28.553 ************************************ 00:03:28.553 10:20:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:28.553 10:20:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:28.553 10:20:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:28.553 10:20:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.553 10:20:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.553 10:20:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.553 ************************************ 00:03:28.553 START TEST rpc_daemon_integrity 00:03:28.553 ************************************ 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.553 10:20:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.812 { 00:03:28.812 "name": "Malloc2", 00:03:28.812 "aliases": [ 00:03:28.812 "e2d7f10f-65ba-4588-a9f0-624ac93cdc12" 00:03:28.812 ], 00:03:28.812 "product_name": "Malloc disk", 00:03:28.812 "block_size": 512, 00:03:28.812 "num_blocks": 16384, 00:03:28.812 "uuid": "e2d7f10f-65ba-4588-a9f0-624ac93cdc12", 00:03:28.812 "assigned_rate_limits": { 00:03:28.812 "rw_ios_per_sec": 0, 00:03:28.812 "rw_mbytes_per_sec": 0, 00:03:28.812 "r_mbytes_per_sec": 0, 00:03:28.812 "w_mbytes_per_sec": 0 00:03:28.812 }, 00:03:28.812 "claimed": false, 00:03:28.812 "zoned": false, 00:03:28.812 "supported_io_types": { 00:03:28.812 "read": true, 00:03:28.812 "write": true, 00:03:28.812 "unmap": true, 00:03:28.812 "flush": true, 00:03:28.812 "reset": true, 00:03:28.812 "nvme_admin": false, 00:03:28.812 "nvme_io": false, 00:03:28.812 "nvme_io_md": false, 00:03:28.812 "write_zeroes": true, 00:03:28.812 "zcopy": true, 00:03:28.812 "get_zone_info": false, 00:03:28.812 "zone_management": false, 00:03:28.812 "zone_append": false, 00:03:28.812 "compare": false, 00:03:28.812 "compare_and_write": false, 00:03:28.812 "abort": true, 00:03:28.812 "seek_hole": false, 00:03:28.812 "seek_data": false, 00:03:28.812 "copy": true, 00:03:28.812 "nvme_iov_md": false 00:03:28.812 }, 00:03:28.812 "memory_domains": [ 00:03:28.812 { 
00:03:28.812 "dma_device_id": "system", 00:03:28.812 "dma_device_type": 1 00:03:28.812 }, 00:03:28.812 { 00:03:28.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.812 "dma_device_type": 2 00:03:28.812 } 00:03:28.812 ], 00:03:28.812 "driver_specific": {} 00:03:28.812 } 00:03:28.812 ]' 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.812 [2024-11-20 10:20:29.352601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:28.812 [2024-11-20 10:20:29.352629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.812 [2024-11-20 10:20:29.352641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x665b70 00:03:28.812 [2024-11-20 10:20:29.352648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.812 [2024-11-20 10:20:29.353636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.812 [2024-11-20 10:20:29.353657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.812 Passthru0 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.812 { 00:03:28.812 "name": "Malloc2", 00:03:28.812 "aliases": [ 00:03:28.812 "e2d7f10f-65ba-4588-a9f0-624ac93cdc12" 00:03:28.812 ], 00:03:28.812 "product_name": "Malloc disk", 00:03:28.812 "block_size": 512, 00:03:28.812 "num_blocks": 16384, 00:03:28.812 "uuid": "e2d7f10f-65ba-4588-a9f0-624ac93cdc12", 00:03:28.812 "assigned_rate_limits": { 00:03:28.812 "rw_ios_per_sec": 0, 00:03:28.812 "rw_mbytes_per_sec": 0, 00:03:28.812 "r_mbytes_per_sec": 0, 00:03:28.812 "w_mbytes_per_sec": 0 00:03:28.812 }, 00:03:28.812 "claimed": true, 00:03:28.812 "claim_type": "exclusive_write", 00:03:28.812 "zoned": false, 00:03:28.812 "supported_io_types": { 00:03:28.812 "read": true, 00:03:28.812 "write": true, 00:03:28.812 "unmap": true, 00:03:28.812 "flush": true, 00:03:28.812 "reset": true, 00:03:28.812 "nvme_admin": false, 00:03:28.812 "nvme_io": false, 00:03:28.812 "nvme_io_md": false, 00:03:28.812 "write_zeroes": true, 00:03:28.812 "zcopy": true, 00:03:28.812 "get_zone_info": false, 00:03:28.812 "zone_management": false, 00:03:28.812 "zone_append": false, 00:03:28.812 "compare": false, 00:03:28.812 "compare_and_write": false, 00:03:28.812 "abort": true, 00:03:28.812 "seek_hole": false, 00:03:28.812 "seek_data": false, 00:03:28.812 "copy": true, 00:03:28.812 "nvme_iov_md": false 00:03:28.812 }, 00:03:28.812 "memory_domains": [ 00:03:28.812 { 00:03:28.812 "dma_device_id": "system", 00:03:28.812 "dma_device_type": 1 00:03:28.812 }, 00:03:28.812 { 00:03:28.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.812 "dma_device_type": 2 00:03:28.812 } 00:03:28.812 ], 00:03:28.812 "driver_specific": {} 00:03:28.812 }, 00:03:28.812 { 00:03:28.812 "name": "Passthru0", 00:03:28.812 "aliases": [ 00:03:28.812 "a8cc5c81-10af-50c4-b59f-517278fb4d62" 00:03:28.812 ], 00:03:28.812 "product_name": "passthru", 00:03:28.812 "block_size": 512, 00:03:28.812 "num_blocks": 16384, 00:03:28.812 "uuid": 
"a8cc5c81-10af-50c4-b59f-517278fb4d62", 00:03:28.812 "assigned_rate_limits": { 00:03:28.812 "rw_ios_per_sec": 0, 00:03:28.812 "rw_mbytes_per_sec": 0, 00:03:28.812 "r_mbytes_per_sec": 0, 00:03:28.812 "w_mbytes_per_sec": 0 00:03:28.812 }, 00:03:28.812 "claimed": false, 00:03:28.812 "zoned": false, 00:03:28.812 "supported_io_types": { 00:03:28.812 "read": true, 00:03:28.812 "write": true, 00:03:28.812 "unmap": true, 00:03:28.812 "flush": true, 00:03:28.812 "reset": true, 00:03:28.812 "nvme_admin": false, 00:03:28.812 "nvme_io": false, 00:03:28.812 "nvme_io_md": false, 00:03:28.812 "write_zeroes": true, 00:03:28.812 "zcopy": true, 00:03:28.812 "get_zone_info": false, 00:03:28.812 "zone_management": false, 00:03:28.812 "zone_append": false, 00:03:28.812 "compare": false, 00:03:28.812 "compare_and_write": false, 00:03:28.812 "abort": true, 00:03:28.812 "seek_hole": false, 00:03:28.812 "seek_data": false, 00:03:28.812 "copy": true, 00:03:28.812 "nvme_iov_md": false 00:03:28.812 }, 00:03:28.812 "memory_domains": [ 00:03:28.812 { 00:03:28.812 "dma_device_id": "system", 00:03:28.812 "dma_device_type": 1 00:03:28.812 }, 00:03:28.812 { 00:03:28.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.812 "dma_device_type": 2 00:03:28.812 } 00:03:28.812 ], 00:03:28.812 "driver_specific": { 00:03:28.812 "passthru": { 00:03:28.812 "name": "Passthru0", 00:03:28.812 "base_bdev_name": "Malloc2" 00:03:28.812 } 00:03:28.812 } 00:03:28.812 } 00:03:28.812 ]' 00:03:28.812 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.813 00:03:28.813 real 0m0.277s 00:03:28.813 user 0m0.169s 00:03:28.813 sys 0m0.044s 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.813 10:20:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.813 ************************************ 00:03:28.813 END TEST rpc_daemon_integrity 00:03:28.813 ************************************ 00:03:28.813 10:20:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:28.813 10:20:29 rpc -- rpc/rpc.sh@84 -- # killprocess 3287872 00:03:28.813 10:20:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 3287872 ']' 00:03:28.813 10:20:29 rpc -- common/autotest_common.sh@958 -- # kill -0 3287872 00:03:28.813 10:20:29 rpc -- common/autotest_common.sh@959 -- # uname 00:03:28.813 10:20:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.813 10:20:29 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287872 00:03:29.071 10:20:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.071 10:20:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.071 10:20:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287872' 00:03:29.071 killing process with pid 3287872 00:03:29.071 10:20:29 rpc -- common/autotest_common.sh@973 -- # kill 3287872 00:03:29.071 10:20:29 rpc -- common/autotest_common.sh@978 -- # wait 3287872 00:03:29.331 00:03:29.331 real 0m2.090s 00:03:29.331 user 0m2.653s 00:03:29.331 sys 0m0.701s 00:03:29.331 10:20:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.331 10:20:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.331 ************************************ 00:03:29.331 END TEST rpc 00:03:29.331 ************************************ 00:03:29.331 10:20:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.331 10:20:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.331 10:20:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.331 10:20:29 -- common/autotest_common.sh@10 -- # set +x 00:03:29.331 ************************************ 00:03:29.331 START TEST skip_rpc 00:03:29.331 ************************************ 00:03:29.331 10:20:29 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.331 * Looking for test storage... 
00:03:29.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:29.331 10:20:30 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.331 10:20:30 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.331 10:20:30 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.590 10:20:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.590 10:20:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.591 10:20:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.591 --rc genhtml_branch_coverage=1 00:03:29.591 --rc genhtml_function_coverage=1 00:03:29.591 --rc genhtml_legend=1 00:03:29.591 --rc geninfo_all_blocks=1 00:03:29.591 --rc geninfo_unexecuted_blocks=1 00:03:29.591 00:03:29.591 ' 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.591 --rc genhtml_branch_coverage=1 00:03:29.591 --rc genhtml_function_coverage=1 00:03:29.591 --rc genhtml_legend=1 00:03:29.591 --rc geninfo_all_blocks=1 00:03:29.591 --rc geninfo_unexecuted_blocks=1 00:03:29.591 00:03:29.591 ' 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.591 --rc genhtml_branch_coverage=1 00:03:29.591 --rc genhtml_function_coverage=1 00:03:29.591 --rc genhtml_legend=1 00:03:29.591 --rc geninfo_all_blocks=1 00:03:29.591 --rc geninfo_unexecuted_blocks=1 00:03:29.591 00:03:29.591 ' 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.591 --rc genhtml_branch_coverage=1 00:03:29.591 --rc genhtml_function_coverage=1 00:03:29.591 --rc genhtml_legend=1 00:03:29.591 --rc geninfo_all_blocks=1 00:03:29.591 --rc geninfo_unexecuted_blocks=1 00:03:29.591 00:03:29.591 ' 00:03:29.591 10:20:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.591 10:20:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:29.591 10:20:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.591 10:20:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.591 ************************************ 00:03:29.591 START TEST skip_rpc 00:03:29.591 ************************************ 00:03:29.591 10:20:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:29.591 10:20:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3288366 00:03:29.591 10:20:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.591 10:20:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:29.591 10:20:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:29.591 [2024-11-20 10:20:30.201145] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:03:29.591 [2024-11-20 10:20:30.201181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288366 ] 00:03:29.591 [2024-11-20 10:20:30.277555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.591 [2024-11-20 10:20:30.318944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:34.860 10:20:35 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3288366 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3288366 ']' 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3288366 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:34.860 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3288366 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3288366' 00:03:34.861 killing process with pid 3288366 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3288366 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3288366 00:03:34.861 00:03:34.861 real 0m5.368s 00:03:34.861 user 0m5.119s 00:03:34.861 sys 0m0.283s 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.861 10:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.861 ************************************ 00:03:34.861 END TEST skip_rpc 00:03:34.861 ************************************ 00:03:34.861 10:20:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:34.861 10:20:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.861 10:20:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.861 10:20:35 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.119 ************************************ 00:03:35.119 START TEST skip_rpc_with_json 00:03:35.119 ************************************ 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3289318 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3289318 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3289318 ']' 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.119 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.119 [2024-11-20 10:20:35.652345] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:03:35.119 [2024-11-20 10:20:35.652389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289318 ] 00:03:35.119 [2024-11-20 10:20:35.728413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.119 [2024-11-20 10:20:35.771127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.377 [2024-11-20 10:20:35.993781] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:35.377 request: 00:03:35.377 { 00:03:35.377 "trtype": "tcp", 00:03:35.377 "method": "nvmf_get_transports", 00:03:35.377 "req_id": 1 00:03:35.377 } 00:03:35.377 Got JSON-RPC error response 00:03:35.377 response: 00:03:35.377 { 00:03:35.377 "code": -19, 00:03:35.377 "message": "No such device" 00:03:35.377 } 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.377 10:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.377 [2024-11-20 10:20:36.005891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:35.377 10:20:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.377 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:35.377 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.377 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.634 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.634 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.634 { 00:03:35.634 "subsystems": [ 00:03:35.634 { 00:03:35.634 "subsystem": "fsdev", 00:03:35.634 "config": [ 00:03:35.634 { 00:03:35.634 "method": "fsdev_set_opts", 00:03:35.634 "params": { 00:03:35.634 "fsdev_io_pool_size": 65535, 00:03:35.634 "fsdev_io_cache_size": 256 00:03:35.634 } 00:03:35.634 } 00:03:35.634 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "vfio_user_target", 00:03:35.635 "config": null 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "keyring", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "iobuf", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "iobuf_set_options", 00:03:35.635 "params": { 00:03:35.635 "small_pool_count": 8192, 00:03:35.635 "large_pool_count": 1024, 00:03:35.635 "small_bufsize": 8192, 00:03:35.635 "large_bufsize": 135168, 00:03:35.635 "enable_numa": false 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "sock", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "sock_set_default_impl", 00:03:35.635 "params": { 00:03:35.635 "impl_name": "posix" 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "sock_impl_set_options", 00:03:35.635 "params": { 00:03:35.635 "impl_name": "ssl", 00:03:35.635 "recv_buf_size": 4096, 00:03:35.635 "send_buf_size": 4096, 
00:03:35.635 "enable_recv_pipe": true, 00:03:35.635 "enable_quickack": false, 00:03:35.635 "enable_placement_id": 0, 00:03:35.635 "enable_zerocopy_send_server": true, 00:03:35.635 "enable_zerocopy_send_client": false, 00:03:35.635 "zerocopy_threshold": 0, 00:03:35.635 "tls_version": 0, 00:03:35.635 "enable_ktls": false 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "sock_impl_set_options", 00:03:35.635 "params": { 00:03:35.635 "impl_name": "posix", 00:03:35.635 "recv_buf_size": 2097152, 00:03:35.635 "send_buf_size": 2097152, 00:03:35.635 "enable_recv_pipe": true, 00:03:35.635 "enable_quickack": false, 00:03:35.635 "enable_placement_id": 0, 00:03:35.635 "enable_zerocopy_send_server": true, 00:03:35.635 "enable_zerocopy_send_client": false, 00:03:35.635 "zerocopy_threshold": 0, 00:03:35.635 "tls_version": 0, 00:03:35.635 "enable_ktls": false 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "vmd", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "accel", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "accel_set_options", 00:03:35.635 "params": { 00:03:35.635 "small_cache_size": 128, 00:03:35.635 "large_cache_size": 16, 00:03:35.635 "task_count": 2048, 00:03:35.635 "sequence_count": 2048, 00:03:35.635 "buf_count": 2048 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "bdev", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "bdev_set_options", 00:03:35.635 "params": { 00:03:35.635 "bdev_io_pool_size": 65535, 00:03:35.635 "bdev_io_cache_size": 256, 00:03:35.635 "bdev_auto_examine": true, 00:03:35.635 "iobuf_small_cache_size": 128, 00:03:35.635 "iobuf_large_cache_size": 16 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "bdev_raid_set_options", 00:03:35.635 "params": { 00:03:35.635 "process_window_size_kb": 1024, 00:03:35.635 "process_max_bandwidth_mb_sec": 0 
00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "bdev_iscsi_set_options", 00:03:35.635 "params": { 00:03:35.635 "timeout_sec": 30 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "bdev_nvme_set_options", 00:03:35.635 "params": { 00:03:35.635 "action_on_timeout": "none", 00:03:35.635 "timeout_us": 0, 00:03:35.635 "timeout_admin_us": 0, 00:03:35.635 "keep_alive_timeout_ms": 10000, 00:03:35.635 "arbitration_burst": 0, 00:03:35.635 "low_priority_weight": 0, 00:03:35.635 "medium_priority_weight": 0, 00:03:35.635 "high_priority_weight": 0, 00:03:35.635 "nvme_adminq_poll_period_us": 10000, 00:03:35.635 "nvme_ioq_poll_period_us": 0, 00:03:35.635 "io_queue_requests": 0, 00:03:35.635 "delay_cmd_submit": true, 00:03:35.635 "transport_retry_count": 4, 00:03:35.635 "bdev_retry_count": 3, 00:03:35.635 "transport_ack_timeout": 0, 00:03:35.635 "ctrlr_loss_timeout_sec": 0, 00:03:35.635 "reconnect_delay_sec": 0, 00:03:35.635 "fast_io_fail_timeout_sec": 0, 00:03:35.635 "disable_auto_failback": false, 00:03:35.635 "generate_uuids": false, 00:03:35.635 "transport_tos": 0, 00:03:35.635 "nvme_error_stat": false, 00:03:35.635 "rdma_srq_size": 0, 00:03:35.635 "io_path_stat": false, 00:03:35.635 "allow_accel_sequence": false, 00:03:35.635 "rdma_max_cq_size": 0, 00:03:35.635 "rdma_cm_event_timeout_ms": 0, 00:03:35.635 "dhchap_digests": [ 00:03:35.635 "sha256", 00:03:35.635 "sha384", 00:03:35.635 "sha512" 00:03:35.635 ], 00:03:35.635 "dhchap_dhgroups": [ 00:03:35.635 "null", 00:03:35.635 "ffdhe2048", 00:03:35.635 "ffdhe3072", 00:03:35.635 "ffdhe4096", 00:03:35.635 "ffdhe6144", 00:03:35.635 "ffdhe8192" 00:03:35.635 ] 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "bdev_nvme_set_hotplug", 00:03:35.635 "params": { 00:03:35.635 "period_us": 100000, 00:03:35.635 "enable": false 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "bdev_wait_for_examine" 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 
00:03:35.635 "subsystem": "scsi", 00:03:35.635 "config": null 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "scheduler", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "framework_set_scheduler", 00:03:35.635 "params": { 00:03:35.635 "name": "static" 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "vhost_scsi", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "vhost_blk", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "ublk", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "nbd", 00:03:35.635 "config": [] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "nvmf", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "nvmf_set_config", 00:03:35.635 "params": { 00:03:35.635 "discovery_filter": "match_any", 00:03:35.635 "admin_cmd_passthru": { 00:03:35.635 "identify_ctrlr": false 00:03:35.635 }, 00:03:35.635 "dhchap_digests": [ 00:03:35.635 "sha256", 00:03:35.635 "sha384", 00:03:35.635 "sha512" 00:03:35.635 ], 00:03:35.635 "dhchap_dhgroups": [ 00:03:35.635 "null", 00:03:35.635 "ffdhe2048", 00:03:35.635 "ffdhe3072", 00:03:35.635 "ffdhe4096", 00:03:35.635 "ffdhe6144", 00:03:35.635 "ffdhe8192" 00:03:35.635 ] 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "nvmf_set_max_subsystems", 00:03:35.635 "params": { 00:03:35.635 "max_subsystems": 1024 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "nvmf_set_crdt", 00:03:35.635 "params": { 00:03:35.635 "crdt1": 0, 00:03:35.635 "crdt2": 0, 00:03:35.635 "crdt3": 0 00:03:35.635 } 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "method": "nvmf_create_transport", 00:03:35.635 "params": { 00:03:35.635 "trtype": "TCP", 00:03:35.635 "max_queue_depth": 128, 00:03:35.635 "max_io_qpairs_per_ctrlr": 127, 00:03:35.635 "in_capsule_data_size": 4096, 00:03:35.635 "max_io_size": 131072, 00:03:35.635 
"io_unit_size": 131072, 00:03:35.635 "max_aq_depth": 128, 00:03:35.635 "num_shared_buffers": 511, 00:03:35.635 "buf_cache_size": 4294967295, 00:03:35.635 "dif_insert_or_strip": false, 00:03:35.635 "zcopy": false, 00:03:35.635 "c2h_success": true, 00:03:35.635 "sock_priority": 0, 00:03:35.635 "abort_timeout_sec": 1, 00:03:35.635 "ack_timeout": 0, 00:03:35.635 "data_wr_pool_size": 0 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 }, 00:03:35.635 { 00:03:35.635 "subsystem": "iscsi", 00:03:35.635 "config": [ 00:03:35.635 { 00:03:35.635 "method": "iscsi_set_options", 00:03:35.635 "params": { 00:03:35.635 "node_base": "iqn.2016-06.io.spdk", 00:03:35.635 "max_sessions": 128, 00:03:35.635 "max_connections_per_session": 2, 00:03:35.635 "max_queue_depth": 64, 00:03:35.635 "default_time2wait": 2, 00:03:35.635 "default_time2retain": 20, 00:03:35.635 "first_burst_length": 8192, 00:03:35.635 "immediate_data": true, 00:03:35.635 "allow_duplicated_isid": false, 00:03:35.635 "error_recovery_level": 0, 00:03:35.635 "nop_timeout": 60, 00:03:35.635 "nop_in_interval": 30, 00:03:35.635 "disable_chap": false, 00:03:35.635 "require_chap": false, 00:03:35.635 "mutual_chap": false, 00:03:35.635 "chap_group": 0, 00:03:35.635 "max_large_datain_per_connection": 64, 00:03:35.635 "max_r2t_per_connection": 4, 00:03:35.635 "pdu_pool_size": 36864, 00:03:35.635 "immediate_data_pool_size": 16384, 00:03:35.635 "data_out_pool_size": 2048 00:03:35.635 } 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 } 00:03:35.635 ] 00:03:35.635 } 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3289318 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3289318 ']' 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3289318 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3289318 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3289318' 00:03:35.635 killing process with pid 3289318 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3289318 00:03:35.635 10:20:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3289318 00:03:35.893 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3289546 00:03:35.893 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.893 10:20:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3289546 ']' 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3289546' 00:03:41.156 killing process with pid 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3289546 00:03:41.156 10:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.416 00:03:41.416 real 0m6.291s 00:03:41.416 user 0m5.989s 00:03:41.416 sys 0m0.606s 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.416 ************************************ 00:03:41.416 END TEST skip_rpc_with_json 00:03:41.416 ************************************ 00:03:41.416 10:20:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:41.416 10:20:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.416 10:20:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.416 10:20:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.416 ************************************ 00:03:41.416 START TEST skip_rpc_with_delay 00:03:41.416 ************************************ 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.416 10:20:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.416 [2024-11-20 10:20:42.023107] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:41.416 00:03:41.416 real 0m0.072s 00:03:41.416 user 0m0.048s 00:03:41.416 sys 0m0.023s 00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.416 10:20:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:41.416 ************************************ 00:03:41.416 END TEST skip_rpc_with_delay 00:03:41.416 ************************************ 00:03:41.416 10:20:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:41.416 10:20:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:41.416 10:20:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:41.416 10:20:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.416 10:20:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.416 10:20:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.416 ************************************ 00:03:41.416 START TEST exit_on_failed_rpc_init 00:03:41.416 ************************************ 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3290523 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3290523 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3290523 ']' 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.416 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.676 [2024-11-20 10:20:42.167285] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:03:41.676 [2024-11-20 10:20:42.167330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290523 ] 00:03:41.676 [2024-11-20 10:20:42.244336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.676 [2024-11-20 10:20:42.287315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.934 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.934 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:41.934 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.934 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.934 
10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:41.934 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.935 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.935 [2024-11-20 10:20:42.559443] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:03:41.935 [2024-11-20 10:20:42.559489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290531 ]
00:03:41.935 [2024-11-20 10:20:42.631027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:42.195 [2024-11-20 10:20:42.672976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:03:42.195 [2024-11-20 10:20:42.673029] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:03:42.195 [2024-11-20 10:20:42.673038] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:03:42.195 [2024-11-20 10:20:42.673046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3290523
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3290523 ']'
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3290523
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3290523
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3290523'
killing process with pid 3290523
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3290523
00:03:42.195 10:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3290523
00:03:42.455
00:03:42.455 real 0m0.959s
00:03:42.455 user 0m1.013s
00:03:42.455 sys 0m0.403s
00:03:42.455 10:20:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:42.455 10:20:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:42.455 ************************************
00:03:42.455 END TEST exit_on_failed_rpc_init
00:03:42.455 ************************************
00:03:42.455 10:20:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:42.455
00:03:42.455 real 0m13.160s
00:03:42.455 user 0m12.383s
00:03:42.455 sys 0m1.602s
00:03:42.455 10:20:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:42.455 10:20:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:42.455 ************************************
00:03:42.455 END TEST skip_rpc
00:03:42.455 ************************************
00:03:42.455 10:20:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:42.455 10:20:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:42.455 10:20:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:42.455 10:20:43 -- common/autotest_common.sh@10 -- # set +x
00:03:42.455 ************************************
00:03:42.455 START TEST rpc_client
00:03:42.455 ************************************
00:03:42.455 10:20:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:42.714 * Looking for test storage...
00:03:42.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@345 -- # : 1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@353 -- # local d=1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@355 -- # echo 1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@353 -- # local d=2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@355 -- # echo 2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:42.714 10:20:43 rpc_client -- scripts/common.sh@368 -- # return 0
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.714 --rc genhtml_branch_coverage=1
00:03:42.714 --rc genhtml_function_coverage=1
00:03:42.714 --rc genhtml_legend=1
00:03:42.714 --rc geninfo_all_blocks=1
00:03:42.714 --rc geninfo_unexecuted_blocks=1
00:03:42.714
00:03:42.714 '
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.714 --rc genhtml_branch_coverage=1
00:03:42.714 --rc genhtml_function_coverage=1
00:03:42.714 --rc genhtml_legend=1
00:03:42.714 --rc geninfo_all_blocks=1
00:03:42.714 --rc geninfo_unexecuted_blocks=1
00:03:42.714
00:03:42.714 '
00:03:42.714 10:20:43 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.714 --rc genhtml_branch_coverage=1
00:03:42.714 --rc genhtml_function_coverage=1
00:03:42.714 --rc genhtml_legend=1
00:03:42.714 --rc geninfo_all_blocks=1
00:03:42.714 --rc geninfo_unexecuted_blocks=1
00:03:42.714
00:03:42.714 '
00:03:42.715 10:20:43 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:42.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.715 --rc genhtml_branch_coverage=1
00:03:42.715 --rc genhtml_function_coverage=1
00:03:42.715 --rc genhtml_legend=1
00:03:42.715 --rc geninfo_all_blocks=1
00:03:42.715 --rc geninfo_unexecuted_blocks=1
00:03:42.715
00:03:42.715 '
00:03:42.715 10:20:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:03:42.715 OK
00:03:42.715 10:20:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:03:42.715
00:03:42.715 real 0m0.197s
00:03:42.715 user 0m0.120s
00:03:42.715 sys 0m0.089s
00:03:42.715 10:20:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:42.715 10:20:43 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:03:42.715 ************************************
00:03:42.715 END TEST rpc_client
00:03:42.715 ************************************
00:03:42.715 10:20:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:03:42.715 10:20:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:42.715 10:20:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:42.715 10:20:43 -- common/autotest_common.sh@10 -- # set +x
00:03:42.715 ************************************
00:03:42.715 START TEST json_config
00:03:42.715 ************************************
00:03:42.715 10:20:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:42.974 10:20:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:42.974 10:20:43 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:03:42.974 10:20:43 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:03:42.974 10:20:43 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:03:42.974 10:20:43 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:42.974 10:20:43 json_config -- scripts/common.sh@344 -- # case "$op" in
00:03:42.974 10:20:43 json_config -- scripts/common.sh@345 -- # : 1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:42.974 10:20:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:42.974 10:20:43 json_config -- scripts/common.sh@365 -- # decimal 1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@353 -- # local d=1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:42.974 10:20:43 json_config -- scripts/common.sh@355 -- # echo 1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:03:42.974 10:20:43 json_config -- scripts/common.sh@366 -- # decimal 2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@353 -- # local d=2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:42.974 10:20:43 json_config -- scripts/common.sh@355 -- # echo 2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:03:42.974 10:20:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:42.974 10:20:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:42.974 10:20:43 json_config -- scripts/common.sh@368 -- # return 0
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:42.974 10:20:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:42.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.974 --rc genhtml_branch_coverage=1
00:03:42.974 --rc genhtml_function_coverage=1
00:03:42.974 --rc genhtml_legend=1
00:03:42.975 --rc geninfo_all_blocks=1
00:03:42.975 --rc geninfo_unexecuted_blocks=1
00:03:42.975
00:03:42.975 '
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:42.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.975 --rc genhtml_branch_coverage=1
00:03:42.975 --rc genhtml_function_coverage=1
00:03:42.975 --rc genhtml_legend=1
00:03:42.975 --rc geninfo_all_blocks=1
00:03:42.975 --rc geninfo_unexecuted_blocks=1
00:03:42.975
00:03:42.975 '
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:42.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.975 --rc genhtml_branch_coverage=1
00:03:42.975 --rc genhtml_function_coverage=1
00:03:42.975 --rc genhtml_legend=1
00:03:42.975 --rc geninfo_all_blocks=1
00:03:42.975 --rc geninfo_unexecuted_blocks=1
00:03:42.975
00:03:42.975 '
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:42.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.975 --rc genhtml_branch_coverage=1
00:03:42.975 --rc genhtml_function_coverage=1
00:03:42.975 --rc genhtml_legend=1
00:03:42.975 --rc geninfo_all_blocks=1
00:03:42.975 --rc geninfo_unexecuted_blocks=1
00:03:42.975
00:03:42.975 '
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@7 -- # uname -s
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:42.975 10:20:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:03:42.975 10:20:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:42.975 10:20:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:42.975 10:20:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:42.975 10:20:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:42.975 10:20:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:42.975 10:20:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:42.975 10:20:43 json_config -- paths/export.sh@5 -- # export PATH
00:03:42.975 10:20:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@51 -- # : 0
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:42.975 10:20:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.975 10:20:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:03:42.975 10:20:43 json_config -- json_config/common.sh@9 -- # local app=target
00:03:42.975 10:20:43 json_config -- json_config/common.sh@10 -- # shift
00:03:42.975 10:20:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:42.975 10:20:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:42.975 10:20:43 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:03:42.975 10:20:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:42.975 10:20:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:42.975 10:20:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3290884
00:03:42.975 10:20:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:03:42.975 10:20:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3290884 /var/tmp/spdk_tgt.sock
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 3290884 ']'
00:03:42.975 10:20:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:42.975 10:20:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:42.976 10:20:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:42.976 10:20:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.976 [2024-11-20 10:20:43.690797] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization...
00:03:42.976 [2024-11-20 10:20:43.690846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290884 ]
00:03:43.542 [2024-11-20 10:20:44.140516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:43.542 [2024-11-20 10:20:44.199881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:43.800 10:20:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:43.800 10:20:44 json_config -- common/autotest_common.sh@868 -- # return 0
00:03:43.800 10:20:44 json_config -- json_config/common.sh@26 -- # echo ''
00:03:43.800
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:03:44.060 10:20:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:44.060 10:20:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:03:44.060 10:20:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:44.060 10:20:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:03:44.060 10:20:44 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:03:44.060 10:20:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:03:47.350 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@51 -- # local get_types
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@54 -- # sort
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@62 -- # return 0
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:47.350 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:03:47.350 10:20:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:03:47.350 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:03:47.609 MallocForNvmf0
00:03:47.609 10:20:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:03:47.609 10:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:03:47.609 MallocForNvmf1
00:03:47.609 10:20:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:03:47.609 10:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:03:47.867 [2024-11-20 10:20:48.491978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:47.867 10:20:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:03:47.867 10:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:03:48.125 10:20:48 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:03:48.125 10:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:03:48.384 10:20:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:03:48.384 10:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:03:48.642 10:20:49 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:03:48.642 10:20:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:03:48.642 [2024-11-20 10:20:49.286461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:03:48.642 10:20:49 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:03:48.642 10:20:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:48.642 10:20:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:48.642 10:20:49 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:03:48.642 10:20:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:48.642 10:20:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:48.901 10:20:49 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:03:48.901 10:20:49 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:48.901 10:20:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:48.901 MallocBdevForConfigChangeCheck
00:03:48.901 10:20:49 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:03:48.901 10:20:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:48.901 10:20:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:48.901 10:20:49 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:03:48.901 10:20:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:49.470 10:20:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:03:49.470 10:20:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:03:49.470 10:20:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:03:49.470 10:20:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:03:49.470 10:20:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:50.845 Calling clear_iscsi_subsystem
00:03:50.845 Calling clear_nvmf_subsystem
00:03:50.845 Calling clear_nbd_subsystem
00:03:50.845 Calling clear_ublk_subsystem
00:03:50.845 Calling clear_vhost_blk_subsystem
00:03:50.845 Calling clear_vhost_scsi_subsystem
00:03:50.845 Calling clear_bdev_subsystem
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@350 -- # count=100
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:50.845 10:20:51 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:03:51.412 10:20:51 json_config -- json_config/json_config.sh@352 -- # break
00:03:51.412 10:20:51 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:03:51.412 10:20:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:03:51.412 10:20:51 json_config -- json_config/common.sh@31 -- # local app=target
00:03:51.412 10:20:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:51.412 10:20:51 json_config -- json_config/common.sh@35 -- # [[ -n 3290884 ]]
00:03:51.412 10:20:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3290884
00:03:51.412 10:20:51 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:51.412 10:20:51 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:51.412 10:20:51 json_config -- json_config/common.sh@41 -- # kill -0 3290884
00:03:51.412 10:20:51 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:03:51.980 10:20:52 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:03:51.980 10:20:52 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:51.980 10:20:52 json_config -- json_config/common.sh@41 -- # kill -0 3290884
00:03:51.980 10:20:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:51.980 10:20:52 json_config -- json_config/common.sh@43 -- # break
00:03:51.980 10:20:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:51.980 10:20:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:03:51.980 10:20:52 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:03:51.980 10:20:52 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:51.980 10:20:52 json_config -- json_config/common.sh@9 -- # local app=target
00:03:51.980 10:20:52 json_config -- json_config/common.sh@10 -- # shift
00:03:51.980 10:20:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:51.980 10:20:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:51.980 10:20:52 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:03:51.980 10:20:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:51.980 10:20:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:51.980 10:20:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3292415
00:03:51.980 10:20:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:03:51.980 10:20:52 json_config -- json_config/common.sh@25 -- # waitforlisten 3292415 /var/tmp/spdk_tgt.sock
00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 3292415 ']'
00:03:51.980 10:20:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.980 10:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.980 [2024-11-20 10:20:52.478387] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:03:51.980 [2024-11-20 10:20:52.478449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292415 ] 00:03:52.239 [2024-11-20 10:20:52.941990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.498 [2024-11-20 10:20:53.000587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.787 [2024-11-20 10:20:56.028753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.787 [2024-11-20 10:20:56.061123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:56.046 10:20:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.046 10:20:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:56.046 10:20:56 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.046 00:03:56.046 10:20:56 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:56.046 10:20:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:56.046 INFO: Checking if target configuration is the same... 
00:03:56.046 10:20:56 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.046 10:20:56 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:56.046 10:20:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.046 + '[' 2 -ne 2 ']' 00:03:56.046 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.046 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:56.046 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.046 +++ basename /dev/fd/62 00:03:56.046 ++ mktemp /tmp/62.XXX 00:03:56.046 + tmp_file_1=/tmp/62.bPD 00:03:56.046 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.046 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.046 + tmp_file_2=/tmp/spdk_tgt_config.json.eRZ 00:03:56.046 + ret=0 00:03:56.046 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.613 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.613 + diff -u /tmp/62.bPD /tmp/spdk_tgt_config.json.eRZ 00:03:56.613 + echo 'INFO: JSON config files are the same' 00:03:56.613 INFO: JSON config files are the same 00:03:56.613 + rm /tmp/62.bPD /tmp/spdk_tgt_config.json.eRZ 00:03:56.613 + exit 0 00:03:56.613 10:20:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:56.613 10:20:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.613 INFO: changing configuration and checking if this can be detected... 
00:03:56.613 10:20:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.614 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.614 10:20:57 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.614 10:20:57 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:56.614 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.614 + '[' 2 -ne 2 ']' 00:03:56.614 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.614 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:56.614 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.614 +++ basename /dev/fd/62 00:03:56.614 ++ mktemp /tmp/62.XXX 00:03:56.614 + tmp_file_1=/tmp/62.ntO 00:03:56.614 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.614 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.614 + tmp_file_2=/tmp/spdk_tgt_config.json.bYm 00:03:56.614 + ret=0 00:03:56.614 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.181 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.181 + diff -u /tmp/62.ntO /tmp/spdk_tgt_config.json.bYm 00:03:57.181 + ret=1 00:03:57.181 + echo '=== Start of file: /tmp/62.ntO ===' 00:03:57.181 + cat /tmp/62.ntO 00:03:57.181 + echo '=== End of file: /tmp/62.ntO ===' 00:03:57.181 + echo '' 00:03:57.181 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bYm ===' 00:03:57.181 + cat /tmp/spdk_tgt_config.json.bYm 00:03:57.181 + echo '=== End of file: /tmp/spdk_tgt_config.json.bYm ===' 00:03:57.181 + echo '' 00:03:57.181 + rm /tmp/62.ntO /tmp/spdk_tgt_config.json.bYm 00:03:57.181 + exit 1 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:57.181 INFO: configuration change detected. 
00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@324 -- # [[ -n 3292415 ]] 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.181 10:20:57 json_config -- json_config/json_config.sh@330 -- # killprocess 3292415 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@954 -- # '[' -z 3292415 ']' 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@958 -- # kill -0 
3292415 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@959 -- # uname 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292415 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3292415' 00:03:57.181 killing process with pid 3292415 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@973 -- # kill 3292415 00:03:57.181 10:20:57 json_config -- common/autotest_common.sh@978 -- # wait 3292415 00:03:58.558 10:20:59 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.558 10:20:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:58.558 10:20:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.558 10:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.818 10:20:59 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:58.818 10:20:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:58.818 INFO: Success 00:03:58.818 00:03:58.818 real 0m15.884s 00:03:58.818 user 0m16.327s 00:03:58.818 sys 0m2.813s 00:03:58.818 10:20:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.818 10:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.818 ************************************ 00:03:58.818 END TEST json_config 00:03:58.818 ************************************ 00:03:58.818 10:20:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.818 10:20:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.818 10:20:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.818 10:20:59 -- common/autotest_common.sh@10 -- # set +x 00:03:58.818 ************************************ 00:03:58.818 START TEST json_config_extra_key 00:03:58.818 ************************************ 00:03:58.818 10:20:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.819 10:20:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:58.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.819 --rc genhtml_branch_coverage=1 00:03:58.819 --rc genhtml_function_coverage=1 00:03:58.819 --rc genhtml_legend=1 00:03:58.819 --rc geninfo_all_blocks=1 
00:03:58.819 --rc geninfo_unexecuted_blocks=1 00:03:58.819 00:03:58.819 ' 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:58.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.819 --rc genhtml_branch_coverage=1 00:03:58.819 --rc genhtml_function_coverage=1 00:03:58.819 --rc genhtml_legend=1 00:03:58.819 --rc geninfo_all_blocks=1 00:03:58.819 --rc geninfo_unexecuted_blocks=1 00:03:58.819 00:03:58.819 ' 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:58.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.819 --rc genhtml_branch_coverage=1 00:03:58.819 --rc genhtml_function_coverage=1 00:03:58.819 --rc genhtml_legend=1 00:03:58.819 --rc geninfo_all_blocks=1 00:03:58.819 --rc geninfo_unexecuted_blocks=1 00:03:58.819 00:03:58.819 ' 00:03:58.819 10:20:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:58.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.819 --rc genhtml_branch_coverage=1 00:03:58.819 --rc genhtml_function_coverage=1 00:03:58.819 --rc genhtml_legend=1 00:03:58.819 --rc geninfo_all_blocks=1 00:03:58.819 --rc geninfo_unexecuted_blocks=1 00:03:58.819 00:03:58.819 ' 00:03:58.819 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.080 10:20:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.080 10:20:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.080 10:20:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.080 10:20:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.080 10:20:59 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.080 10:20:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.080 10:20:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.080 10:20:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:59.080 10:20:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:59.080 10:20:59 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.080 10:20:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:59.080 INFO: launching applications... 00:03:59.080 10:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3293838 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:59.080 Waiting for target to run... 
00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3293838 /var/tmp/spdk_tgt.sock 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3293838 ']' 00:03:59.080 10:20:59 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.080 10:20:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:59.080 [2024-11-20 10:20:59.634426] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:03:59.080 [2024-11-20 10:20:59.634477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293838 ] 00:03:59.648 [2024-11-20 10:21:00.094447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.648 [2024-11-20 10:21:00.142841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.906 10:21:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.906 10:21:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.906 00:03:59.906 10:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:59.906 INFO: shutting down applications... 00:03:59.906 10:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3293838 ]] 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3293838 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3293838 00:03:59.906 10:21:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.474 10:21:00 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3293838 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.474 10:21:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.474 SPDK target shutdown done 00:04:00.474 10:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:00.474 Success 00:04:00.474 00:04:00.474 real 0m1.595s 00:04:00.474 user 0m1.205s 00:04:00.474 sys 0m0.584s 00:04:00.474 10:21:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.474 10:21:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 ************************************ 00:04:00.474 END TEST json_config_extra_key 00:04:00.474 ************************************ 00:04:00.474 10:21:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.474 10:21:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.474 10:21:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.474 10:21:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 ************************************ 00:04:00.474 START TEST alias_rpc 00:04:00.474 ************************************ 00:04:00.474 10:21:01 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.474 * Looking for test storage... 
00:04:00.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:00.474 10:21:01 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.474 10:21:01 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.474 10:21:01 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.733 10:21:01 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.734 10:21:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.734 --rc genhtml_branch_coverage=1 00:04:00.734 --rc genhtml_function_coverage=1 00:04:00.734 --rc genhtml_legend=1 00:04:00.734 --rc geninfo_all_blocks=1 00:04:00.734 --rc geninfo_unexecuted_blocks=1 00:04:00.734 00:04:00.734 ' 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.734 --rc genhtml_branch_coverage=1 00:04:00.734 --rc genhtml_function_coverage=1 00:04:00.734 --rc genhtml_legend=1 00:04:00.734 --rc geninfo_all_blocks=1 00:04:00.734 --rc geninfo_unexecuted_blocks=1 00:04:00.734 00:04:00.734 ' 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:00.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.734 --rc genhtml_branch_coverage=1 00:04:00.734 --rc genhtml_function_coverage=1 00:04:00.734 --rc genhtml_legend=1 00:04:00.734 --rc geninfo_all_blocks=1 00:04:00.734 --rc geninfo_unexecuted_blocks=1 00:04:00.734 00:04:00.734 ' 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.734 --rc genhtml_branch_coverage=1 00:04:00.734 --rc genhtml_function_coverage=1 00:04:00.734 --rc genhtml_legend=1 00:04:00.734 --rc geninfo_all_blocks=1 00:04:00.734 --rc geninfo_unexecuted_blocks=1 00:04:00.734 00:04:00.734 ' 00:04:00.734 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.734 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3294271 00:04:00.734 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3294271 00:04:00.734 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3294271 ']' 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.734 10:21:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.734 [2024-11-20 10:21:01.286128] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:00.734 [2024-11-20 10:21:01.286178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294271 ] 00:04:00.734 [2024-11-20 10:21:01.360401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.734 [2024-11-20 10:21:01.402995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.993 10:21:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.993 10:21:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.993 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:01.252 10:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3294271 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3294271 ']' 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3294271 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294271 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294271' 00:04:01.252 killing process with pid 3294271 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 3294271 00:04:01.252 10:21:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 3294271 00:04:01.511 00:04:01.511 real 0m1.148s 00:04:01.511 user 0m1.181s 00:04:01.511 sys 0m0.419s 00:04:01.511 10:21:02 alias_rpc -- 
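The repeated `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above checks whether the installed lcov predates version 2 before choosing coverage options. A condensed re-creation of that comparison, assuming the behavior shown in the xtrace (split on `.`, `-`, `:`, then compare component by component); the function name here is illustrative:

```shell
# version_lt A B: succeed when version A sorts strictly before version B
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15), as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing components with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check in the log
```

Padding the shorter array with zeros is what makes `1.15 < 2` come out true: the comparison is numeric per component, not a string compare of the whole version.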
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.511 10:21:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.511 ************************************ 00:04:01.511 END TEST alias_rpc 00:04:01.511 ************************************ 00:04:01.770 10:21:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:01.770 10:21:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.770 10:21:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.770 10:21:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.770 10:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:01.770 ************************************ 00:04:01.770 START TEST spdkcli_tcp 00:04:01.770 ************************************ 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.770 * Looking for test storage... 
00:04:01.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.770 10:21:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.770 --rc genhtml_branch_coverage=1 00:04:01.770 --rc genhtml_function_coverage=1 00:04:01.770 --rc genhtml_legend=1 00:04:01.770 --rc geninfo_all_blocks=1 00:04:01.770 --rc geninfo_unexecuted_blocks=1 00:04:01.770 00:04:01.770 ' 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.770 --rc genhtml_branch_coverage=1 00:04:01.770 --rc genhtml_function_coverage=1 00:04:01.770 --rc genhtml_legend=1 00:04:01.770 --rc geninfo_all_blocks=1 00:04:01.770 --rc geninfo_unexecuted_blocks=1 00:04:01.770 00:04:01.770 ' 00:04:01.770 10:21:02 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.770 --rc genhtml_branch_coverage=1 00:04:01.770 --rc genhtml_function_coverage=1 00:04:01.770 --rc genhtml_legend=1 00:04:01.770 --rc geninfo_all_blocks=1 00:04:01.770 --rc geninfo_unexecuted_blocks=1 00:04:01.770 00:04:01.770 ' 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.770 --rc genhtml_branch_coverage=1 00:04:01.770 --rc genhtml_function_coverage=1 00:04:01.770 --rc genhtml_legend=1 00:04:01.770 --rc geninfo_all_blocks=1 00:04:01.770 --rc geninfo_unexecuted_blocks=1 00:04:01.770 00:04:01.770 ' 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3294608 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3294608 00:04:01.770 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3294608 ']' 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.770 10:21:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.771 10:21:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.771 10:21:02 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.771 10:21:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.030 [2024-11-20 10:21:02.507058] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:02.030 [2024-11-20 10:21:02.507109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294608 ] 00:04:02.030 [2024-11-20 10:21:02.582410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.030 [2024-11-20 10:21:02.626744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.030 [2024-11-20 10:21:02.626745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.288 10:21:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.288 10:21:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:02.288 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3294612 00:04:02.288 10:21:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:02.288 10:21:02 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:02.288 [ 00:04:02.288 "bdev_malloc_delete", 00:04:02.288 "bdev_malloc_create", 00:04:02.288 "bdev_null_resize", 00:04:02.288 "bdev_null_delete", 00:04:02.288 "bdev_null_create", 00:04:02.288 "bdev_nvme_cuse_unregister", 00:04:02.288 "bdev_nvme_cuse_register", 00:04:02.288 "bdev_opal_new_user", 00:04:02.288 "bdev_opal_set_lock_state", 00:04:02.288 "bdev_opal_delete", 00:04:02.288 "bdev_opal_get_info", 00:04:02.288 "bdev_opal_create", 00:04:02.288 "bdev_nvme_opal_revert", 00:04:02.288 "bdev_nvme_opal_init", 00:04:02.288 "bdev_nvme_send_cmd", 00:04:02.288 "bdev_nvme_set_keys", 00:04:02.288 "bdev_nvme_get_path_iostat", 00:04:02.288 "bdev_nvme_get_mdns_discovery_info", 00:04:02.288 "bdev_nvme_stop_mdns_discovery", 00:04:02.288 "bdev_nvme_start_mdns_discovery", 00:04:02.288 "bdev_nvme_set_multipath_policy", 00:04:02.288 "bdev_nvme_set_preferred_path", 00:04:02.288 "bdev_nvme_get_io_paths", 00:04:02.288 "bdev_nvme_remove_error_injection", 00:04:02.288 "bdev_nvme_add_error_injection", 00:04:02.288 "bdev_nvme_get_discovery_info", 00:04:02.288 "bdev_nvme_stop_discovery", 00:04:02.288 "bdev_nvme_start_discovery", 00:04:02.288 "bdev_nvme_get_controller_health_info", 00:04:02.288 "bdev_nvme_disable_controller", 00:04:02.288 "bdev_nvme_enable_controller", 00:04:02.288 "bdev_nvme_reset_controller", 00:04:02.288 "bdev_nvme_get_transport_statistics", 00:04:02.288 "bdev_nvme_apply_firmware", 00:04:02.288 "bdev_nvme_detach_controller", 00:04:02.288 "bdev_nvme_get_controllers", 00:04:02.288 "bdev_nvme_attach_controller", 00:04:02.288 "bdev_nvme_set_hotplug", 00:04:02.288 "bdev_nvme_set_options", 00:04:02.288 "bdev_passthru_delete", 00:04:02.288 "bdev_passthru_create", 00:04:02.288 "bdev_lvol_set_parent_bdev", 00:04:02.288 "bdev_lvol_set_parent", 00:04:02.288 "bdev_lvol_check_shallow_copy", 00:04:02.288 "bdev_lvol_start_shallow_copy", 00:04:02.288 "bdev_lvol_grow_lvstore", 00:04:02.288 
"bdev_lvol_get_lvols", 00:04:02.288 "bdev_lvol_get_lvstores", 00:04:02.288 "bdev_lvol_delete", 00:04:02.288 "bdev_lvol_set_read_only", 00:04:02.288 "bdev_lvol_resize", 00:04:02.288 "bdev_lvol_decouple_parent", 00:04:02.288 "bdev_lvol_inflate", 00:04:02.288 "bdev_lvol_rename", 00:04:02.288 "bdev_lvol_clone_bdev", 00:04:02.288 "bdev_lvol_clone", 00:04:02.288 "bdev_lvol_snapshot", 00:04:02.288 "bdev_lvol_create", 00:04:02.288 "bdev_lvol_delete_lvstore", 00:04:02.288 "bdev_lvol_rename_lvstore", 00:04:02.288 "bdev_lvol_create_lvstore", 00:04:02.288 "bdev_raid_set_options", 00:04:02.288 "bdev_raid_remove_base_bdev", 00:04:02.288 "bdev_raid_add_base_bdev", 00:04:02.288 "bdev_raid_delete", 00:04:02.288 "bdev_raid_create", 00:04:02.288 "bdev_raid_get_bdevs", 00:04:02.288 "bdev_error_inject_error", 00:04:02.288 "bdev_error_delete", 00:04:02.288 "bdev_error_create", 00:04:02.288 "bdev_split_delete", 00:04:02.288 "bdev_split_create", 00:04:02.288 "bdev_delay_delete", 00:04:02.288 "bdev_delay_create", 00:04:02.288 "bdev_delay_update_latency", 00:04:02.288 "bdev_zone_block_delete", 00:04:02.288 "bdev_zone_block_create", 00:04:02.288 "blobfs_create", 00:04:02.288 "blobfs_detect", 00:04:02.288 "blobfs_set_cache_size", 00:04:02.288 "bdev_aio_delete", 00:04:02.288 "bdev_aio_rescan", 00:04:02.288 "bdev_aio_create", 00:04:02.288 "bdev_ftl_set_property", 00:04:02.288 "bdev_ftl_get_properties", 00:04:02.288 "bdev_ftl_get_stats", 00:04:02.288 "bdev_ftl_unmap", 00:04:02.288 "bdev_ftl_unload", 00:04:02.288 "bdev_ftl_delete", 00:04:02.288 "bdev_ftl_load", 00:04:02.288 "bdev_ftl_create", 00:04:02.289 "bdev_virtio_attach_controller", 00:04:02.289 "bdev_virtio_scsi_get_devices", 00:04:02.289 "bdev_virtio_detach_controller", 00:04:02.289 "bdev_virtio_blk_set_hotplug", 00:04:02.289 "bdev_iscsi_delete", 00:04:02.289 "bdev_iscsi_create", 00:04:02.289 "bdev_iscsi_set_options", 00:04:02.289 "accel_error_inject_error", 00:04:02.289 "ioat_scan_accel_module", 00:04:02.289 "dsa_scan_accel_module", 
00:04:02.289 "iaa_scan_accel_module", 00:04:02.289 "vfu_virtio_create_fs_endpoint", 00:04:02.289 "vfu_virtio_create_scsi_endpoint", 00:04:02.289 "vfu_virtio_scsi_remove_target", 00:04:02.289 "vfu_virtio_scsi_add_target", 00:04:02.289 "vfu_virtio_create_blk_endpoint", 00:04:02.289 "vfu_virtio_delete_endpoint", 00:04:02.289 "keyring_file_remove_key", 00:04:02.289 "keyring_file_add_key", 00:04:02.289 "keyring_linux_set_options", 00:04:02.289 "fsdev_aio_delete", 00:04:02.289 "fsdev_aio_create", 00:04:02.289 "iscsi_get_histogram", 00:04:02.289 "iscsi_enable_histogram", 00:04:02.289 "iscsi_set_options", 00:04:02.289 "iscsi_get_auth_groups", 00:04:02.289 "iscsi_auth_group_remove_secret", 00:04:02.289 "iscsi_auth_group_add_secret", 00:04:02.289 "iscsi_delete_auth_group", 00:04:02.289 "iscsi_create_auth_group", 00:04:02.289 "iscsi_set_discovery_auth", 00:04:02.289 "iscsi_get_options", 00:04:02.289 "iscsi_target_node_request_logout", 00:04:02.289 "iscsi_target_node_set_redirect", 00:04:02.289 "iscsi_target_node_set_auth", 00:04:02.289 "iscsi_target_node_add_lun", 00:04:02.289 "iscsi_get_stats", 00:04:02.289 "iscsi_get_connections", 00:04:02.289 "iscsi_portal_group_set_auth", 00:04:02.289 "iscsi_start_portal_group", 00:04:02.289 "iscsi_delete_portal_group", 00:04:02.289 "iscsi_create_portal_group", 00:04:02.289 "iscsi_get_portal_groups", 00:04:02.289 "iscsi_delete_target_node", 00:04:02.289 "iscsi_target_node_remove_pg_ig_maps", 00:04:02.289 "iscsi_target_node_add_pg_ig_maps", 00:04:02.289 "iscsi_create_target_node", 00:04:02.289 "iscsi_get_target_nodes", 00:04:02.289 "iscsi_delete_initiator_group", 00:04:02.289 "iscsi_initiator_group_remove_initiators", 00:04:02.289 "iscsi_initiator_group_add_initiators", 00:04:02.289 "iscsi_create_initiator_group", 00:04:02.289 "iscsi_get_initiator_groups", 00:04:02.289 "nvmf_set_crdt", 00:04:02.289 "nvmf_set_config", 00:04:02.289 "nvmf_set_max_subsystems", 00:04:02.289 "nvmf_stop_mdns_prr", 00:04:02.289 "nvmf_publish_mdns_prr", 
00:04:02.289 "nvmf_subsystem_get_listeners", 00:04:02.289 "nvmf_subsystem_get_qpairs", 00:04:02.289 "nvmf_subsystem_get_controllers", 00:04:02.289 "nvmf_get_stats", 00:04:02.289 "nvmf_get_transports", 00:04:02.289 "nvmf_create_transport", 00:04:02.289 "nvmf_get_targets", 00:04:02.289 "nvmf_delete_target", 00:04:02.289 "nvmf_create_target", 00:04:02.289 "nvmf_subsystem_allow_any_host", 00:04:02.289 "nvmf_subsystem_set_keys", 00:04:02.289 "nvmf_subsystem_remove_host", 00:04:02.289 "nvmf_subsystem_add_host", 00:04:02.289 "nvmf_ns_remove_host", 00:04:02.289 "nvmf_ns_add_host", 00:04:02.289 "nvmf_subsystem_remove_ns", 00:04:02.289 "nvmf_subsystem_set_ns_ana_group", 00:04:02.289 "nvmf_subsystem_add_ns", 00:04:02.289 "nvmf_subsystem_listener_set_ana_state", 00:04:02.289 "nvmf_discovery_get_referrals", 00:04:02.289 "nvmf_discovery_remove_referral", 00:04:02.289 "nvmf_discovery_add_referral", 00:04:02.289 "nvmf_subsystem_remove_listener", 00:04:02.289 "nvmf_subsystem_add_listener", 00:04:02.289 "nvmf_delete_subsystem", 00:04:02.289 "nvmf_create_subsystem", 00:04:02.289 "nvmf_get_subsystems", 00:04:02.289 "env_dpdk_get_mem_stats", 00:04:02.289 "nbd_get_disks", 00:04:02.289 "nbd_stop_disk", 00:04:02.289 "nbd_start_disk", 00:04:02.289 "ublk_recover_disk", 00:04:02.289 "ublk_get_disks", 00:04:02.289 "ublk_stop_disk", 00:04:02.289 "ublk_start_disk", 00:04:02.289 "ublk_destroy_target", 00:04:02.289 "ublk_create_target", 00:04:02.289 "virtio_blk_create_transport", 00:04:02.289 "virtio_blk_get_transports", 00:04:02.289 "vhost_controller_set_coalescing", 00:04:02.289 "vhost_get_controllers", 00:04:02.289 "vhost_delete_controller", 00:04:02.289 "vhost_create_blk_controller", 00:04:02.289 "vhost_scsi_controller_remove_target", 00:04:02.289 "vhost_scsi_controller_add_target", 00:04:02.289 "vhost_start_scsi_controller", 00:04:02.289 "vhost_create_scsi_controller", 00:04:02.289 "thread_set_cpumask", 00:04:02.289 "scheduler_set_options", 00:04:02.289 "framework_get_governor", 00:04:02.289 
"framework_get_scheduler", 00:04:02.289 "framework_set_scheduler", 00:04:02.289 "framework_get_reactors", 00:04:02.289 "thread_get_io_channels", 00:04:02.289 "thread_get_pollers", 00:04:02.289 "thread_get_stats", 00:04:02.289 "framework_monitor_context_switch", 00:04:02.289 "spdk_kill_instance", 00:04:02.289 "log_enable_timestamps", 00:04:02.289 "log_get_flags", 00:04:02.289 "log_clear_flag", 00:04:02.289 "log_set_flag", 00:04:02.289 "log_get_level", 00:04:02.289 "log_set_level", 00:04:02.289 "log_get_print_level", 00:04:02.289 "log_set_print_level", 00:04:02.289 "framework_enable_cpumask_locks", 00:04:02.289 "framework_disable_cpumask_locks", 00:04:02.289 "framework_wait_init", 00:04:02.289 "framework_start_init", 00:04:02.289 "scsi_get_devices", 00:04:02.289 "bdev_get_histogram", 00:04:02.289 "bdev_enable_histogram", 00:04:02.289 "bdev_set_qos_limit", 00:04:02.289 "bdev_set_qd_sampling_period", 00:04:02.289 "bdev_get_bdevs", 00:04:02.289 "bdev_reset_iostat", 00:04:02.289 "bdev_get_iostat", 00:04:02.289 "bdev_examine", 00:04:02.289 "bdev_wait_for_examine", 00:04:02.289 "bdev_set_options", 00:04:02.289 "accel_get_stats", 00:04:02.289 "accel_set_options", 00:04:02.289 "accel_set_driver", 00:04:02.289 "accel_crypto_key_destroy", 00:04:02.289 "accel_crypto_keys_get", 00:04:02.289 "accel_crypto_key_create", 00:04:02.289 "accel_assign_opc", 00:04:02.289 "accel_get_module_info", 00:04:02.289 "accel_get_opc_assignments", 00:04:02.289 "vmd_rescan", 00:04:02.289 "vmd_remove_device", 00:04:02.289 "vmd_enable", 00:04:02.289 "sock_get_default_impl", 00:04:02.289 "sock_set_default_impl", 00:04:02.289 "sock_impl_set_options", 00:04:02.289 "sock_impl_get_options", 00:04:02.289 "iobuf_get_stats", 00:04:02.289 "iobuf_set_options", 00:04:02.289 "keyring_get_keys", 00:04:02.289 "vfu_tgt_set_base_path", 00:04:02.289 "framework_get_pci_devices", 00:04:02.289 "framework_get_config", 00:04:02.289 "framework_get_subsystems", 00:04:02.289 "fsdev_set_opts", 00:04:02.289 "fsdev_get_opts", 
00:04:02.289 "trace_get_info", 00:04:02.289 "trace_get_tpoint_group_mask", 00:04:02.289 "trace_disable_tpoint_group", 00:04:02.289 "trace_enable_tpoint_group", 00:04:02.289 "trace_clear_tpoint_mask", 00:04:02.289 "trace_set_tpoint_mask", 00:04:02.289 "notify_get_notifications", 00:04:02.289 "notify_get_types", 00:04:02.289 "spdk_get_version", 00:04:02.289 "rpc_get_methods" 00:04:02.289 ] 00:04:02.548 10:21:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.548 10:21:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:02.548 10:21:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3294608 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3294608 ']' 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3294608 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294608 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294608' 00:04:02.548 killing process with pid 3294608 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3294608 00:04:02.548 10:21:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3294608 00:04:02.807 00:04:02.807 real 0m1.158s 00:04:02.807 user 0m1.937s 00:04:02.807 sys 0m0.457s 00:04:02.807 10:21:03 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.807 10:21:03 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.807 ************************************ 00:04:02.807 END TEST spdkcli_tcp 00:04:02.807 ************************************ 00:04:02.807 10:21:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.807 10:21:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.807 10:21:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.807 10:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.807 ************************************ 00:04:02.807 START TEST dpdk_mem_utility 00:04:02.807 ************************************ 00:04:02.807 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:03.066 * Looking for test storage... 00:04:03.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.066 10:21:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:03.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.066 --rc genhtml_branch_coverage=1 00:04:03.066 --rc genhtml_function_coverage=1 00:04:03.066 --rc genhtml_legend=1 00:04:03.066 --rc geninfo_all_blocks=1 00:04:03.066 --rc geninfo_unexecuted_blocks=1 00:04:03.066 00:04:03.066 ' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.066 --rc genhtml_branch_coverage=1 00:04:03.066 --rc genhtml_function_coverage=1 00:04:03.066 --rc genhtml_legend=1 00:04:03.066 --rc geninfo_all_blocks=1 00:04:03.066 --rc geninfo_unexecuted_blocks=1 00:04:03.066 00:04:03.066 ' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.066 --rc genhtml_branch_coverage=1 00:04:03.066 --rc genhtml_function_coverage=1 00:04:03.066 --rc genhtml_legend=1 00:04:03.066 --rc geninfo_all_blocks=1 00:04:03.066 --rc geninfo_unexecuted_blocks=1 00:04:03.066 00:04:03.066 ' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.066 --rc genhtml_branch_coverage=1 00:04:03.066 --rc genhtml_function_coverage=1 00:04:03.066 --rc genhtml_legend=1 00:04:03.066 --rc geninfo_all_blocks=1 00:04:03.066 --rc geninfo_unexecuted_blocks=1 00:04:03.066 00:04:03.066 ' 00:04:03.066 10:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.066 10:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3294828 00:04:03.066 10:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.066 10:21:03 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3294828 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3294828 ']' 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.066 10:21:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.066 [2024-11-20 10:21:03.732515] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:03.066 [2024-11-20 10:21:03.732566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294828 ] 00:04:03.325 [2024-11-20 10:21:03.808995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.325 [2024-11-20 10:21:03.852011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.585 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.585 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:03.585 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.585 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.585 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:03.585 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.585 { 00:04:03.585 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.585 } 00:04:03.585 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.585 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.585 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:03.585 1 heaps totaling size 810.000000 MiB 00:04:03.585 size: 810.000000 MiB heap id: 0 00:04:03.585 end heaps---------- 00:04:03.585 9 mempools totaling size 595.772034 MiB 00:04:03.585 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.585 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.585 size: 92.545471 MiB name: bdev_io_3294828 00:04:03.585 size: 50.003479 MiB name: msgpool_3294828 00:04:03.585 size: 36.509338 MiB name: fsdev_io_3294828 00:04:03.585 size: 21.763794 MiB name: PDU_Pool 00:04:03.585 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.585 size: 4.133484 MiB name: evtpool_3294828 00:04:03.585 size: 0.026123 MiB name: Session_Pool 00:04:03.585 end mempools------- 00:04:03.585 6 memzones totaling size 4.142822 MiB 00:04:03.585 size: 1.000366 MiB name: RG_ring_0_3294828 00:04:03.585 size: 1.000366 MiB name: RG_ring_1_3294828 00:04:03.585 size: 1.000366 MiB name: RG_ring_4_3294828 00:04:03.585 size: 1.000366 MiB name: RG_ring_5_3294828 00:04:03.585 size: 0.125366 MiB name: RG_ring_2_3294828 00:04:03.585 size: 0.015991 MiB name: RG_ring_3_3294828 00:04:03.585 end memzones------- 00:04:03.585 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.585 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:03.585 list of free elements. 
size: 10.862488 MiB 00:04:03.585 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:03.585 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:03.585 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:03.585 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:03.585 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:03.585 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:03.585 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:03.585 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:03.585 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:03.585 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:03.585 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:03.585 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:03.585 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:03.585 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:03.585 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:03.585 list of standard malloc elements. 
size: 199.218628 MiB 00:04:03.585 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:03.585 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:03.585 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:03.585 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:03.585 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:03.585 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.585 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:03.585 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.585 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:03.585 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:03.585 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:03.585 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:03.585 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:03.585 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:03.585 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:03.585 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:03.585 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:03.585 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:03.585 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:03.586 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:03.586 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:03.586 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:03.586 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:03.586 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:03.586 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:03.586 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:03.586 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:03.586 list of memzone associated elements. 
size: 599.918884 MiB 00:04:03.586 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:03.586 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.586 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:03.586 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.586 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:03.586 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3294828_0 00:04:03.586 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:03.586 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3294828_0 00:04:03.586 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:03.586 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3294828_0 00:04:03.586 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:03.586 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.586 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:03.586 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.586 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:03.586 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3294828_0 00:04:03.586 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:03.586 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3294828 00:04:03.586 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.586 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3294828 00:04:03.586 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:03.586 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.586 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:03.586 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.586 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:03.586 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.586 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:03.586 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.586 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:03.586 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3294828 00:04:03.586 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:03.586 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3294828 00:04:03.586 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:03.586 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3294828 00:04:03.586 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:03.586 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3294828 00:04:03.586 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:03.586 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3294828 00:04:03.586 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:03.586 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3294828 00:04:03.586 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:03.586 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.586 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:03.586 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.586 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:03.586 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.586 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:03.586 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3294828 00:04:03.586 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:03.586 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3294828 00:04:03.586 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:03.586 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.586 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:03.586 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.586 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:03.586 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3294828 00:04:03.586 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:03.586 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.586 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:03.586 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3294828 00:04:03.586 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:03.586 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3294828 00:04:03.586 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:03.586 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3294828 00:04:03.586 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:03.586 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.586 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.586 10:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3294828 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3294828 ']' 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3294828 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294828 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.586 10:21:04 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294828' 00:04:03.586 killing process with pid 3294828 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3294828 00:04:03.586 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3294828 00:04:03.845 00:04:03.845 real 0m1.029s 00:04:03.845 user 0m0.949s 00:04:03.845 sys 0m0.431s 00:04:03.845 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.845 10:21:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.845 ************************************ 00:04:03.845 END TEST dpdk_mem_utility 00:04:03.845 ************************************ 00:04:03.846 10:21:04 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.846 10:21:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.846 10:21:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.846 10:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:04.105 ************************************ 00:04:04.105 START TEST event 00:04:04.105 ************************************ 00:04:04.105 10:21:04 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:04.105 * Looking for test storage... 
00:04:04.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:04.105 10:21:04 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.105 10:21:04 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.105 10:21:04 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.105 10:21:04 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.105 10:21:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.105 10:21:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.105 10:21:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.105 10:21:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.105 10:21:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.105 10:21:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.105 10:21:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.105 10:21:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.105 10:21:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.105 10:21:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.105 10:21:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.105 10:21:04 event -- scripts/common.sh@344 -- # case "$op" in 00:04:04.105 10:21:04 event -- scripts/common.sh@345 -- # : 1 00:04:04.105 10:21:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.105 10:21:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.105 10:21:04 event -- scripts/common.sh@365 -- # decimal 1 00:04:04.105 10:21:04 event -- scripts/common.sh@353 -- # local d=1 00:04:04.105 10:21:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.105 10:21:04 event -- scripts/common.sh@355 -- # echo 1 00:04:04.105 10:21:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.105 10:21:04 event -- scripts/common.sh@366 -- # decimal 2 00:04:04.105 10:21:04 event -- scripts/common.sh@353 -- # local d=2 00:04:04.105 10:21:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.105 10:21:04 event -- scripts/common.sh@355 -- # echo 2 00:04:04.105 10:21:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.105 10:21:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.105 10:21:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.106 10:21:04 event -- scripts/common.sh@368 -- # return 0 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.106 --rc genhtml_branch_coverage=1 00:04:04.106 --rc genhtml_function_coverage=1 00:04:04.106 --rc genhtml_legend=1 00:04:04.106 --rc geninfo_all_blocks=1 00:04:04.106 --rc geninfo_unexecuted_blocks=1 00:04:04.106 00:04:04.106 ' 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.106 --rc genhtml_branch_coverage=1 00:04:04.106 --rc genhtml_function_coverage=1 00:04:04.106 --rc genhtml_legend=1 00:04:04.106 --rc geninfo_all_blocks=1 00:04:04.106 --rc geninfo_unexecuted_blocks=1 00:04:04.106 00:04:04.106 ' 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.106 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:04.106 --rc genhtml_branch_coverage=1 00:04:04.106 --rc genhtml_function_coverage=1 00:04:04.106 --rc genhtml_legend=1 00:04:04.106 --rc geninfo_all_blocks=1 00:04:04.106 --rc geninfo_unexecuted_blocks=1 00:04:04.106 00:04:04.106 ' 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.106 --rc genhtml_branch_coverage=1 00:04:04.106 --rc genhtml_function_coverage=1 00:04:04.106 --rc genhtml_legend=1 00:04:04.106 --rc geninfo_all_blocks=1 00:04:04.106 --rc geninfo_unexecuted_blocks=1 00:04:04.106 00:04:04.106 ' 00:04:04.106 10:21:04 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:04.106 10:21:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:04.106 10:21:04 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:04.106 10:21:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.106 10:21:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:04.106 ************************************ 00:04:04.106 START TEST event_perf 00:04:04.106 ************************************ 00:04:04.106 10:21:04 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.106 Running I/O for 1 seconds...[2024-11-20 10:21:04.831011] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:04.106 [2024-11-20 10:21:04.831079] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294989 ] 00:04:04.364 [2024-11-20 10:21:04.912657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.364 [2024-11-20 10:21:04.957937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.364 [2024-11-20 10:21:04.957976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.364 [2024-11-20 10:21:04.958045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.364 [2024-11-20 10:21:04.958046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.295 Running I/O for 1 seconds... 00:04:05.295 lcore 0: 203332 00:04:05.295 lcore 1: 203330 00:04:05.295 lcore 2: 203330 00:04:05.295 lcore 3: 203332 00:04:05.295 done. 
00:04:05.295 00:04:05.295 real 0m1.190s 00:04:05.295 user 0m4.100s 00:04:05.295 sys 0m0.087s 00:04:05.295 10:21:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.295 10:21:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:05.295 ************************************ 00:04:05.295 END TEST event_perf 00:04:05.295 ************************************ 00:04:05.554 10:21:06 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.554 10:21:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:05.554 10:21:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.554 10:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.554 ************************************ 00:04:05.554 START TEST event_reactor 00:04:05.554 ************************************ 00:04:05.554 10:21:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.554 [2024-11-20 10:21:06.091230] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:05.554 [2024-11-20 10:21:06.091290] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295241 ] 00:04:05.554 [2024-11-20 10:21:06.168826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.554 [2024-11-20 10:21:06.210475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.655 test_start 00:04:06.655 oneshot 00:04:06.655 tick 100 00:04:06.655 tick 100 00:04:06.655 tick 250 00:04:06.655 tick 100 00:04:06.655 tick 100 00:04:06.655 tick 100 00:04:06.655 tick 250 00:04:06.655 tick 500 00:04:06.655 tick 100 00:04:06.655 tick 100 00:04:06.655 tick 250 00:04:06.655 tick 100 00:04:06.655 tick 100 00:04:06.655 test_end 00:04:06.655 00:04:06.655 real 0m1.180s 00:04:06.655 user 0m1.110s 00:04:06.655 sys 0m0.066s 00:04:06.655 10:21:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.655 10:21:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:06.655 ************************************ 00:04:06.655 END TEST event_reactor 00:04:06.655 ************************************ 00:04:06.655 10:21:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.655 10:21:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:06.655 10:21:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.655 10:21:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.004 ************************************ 00:04:07.004 START TEST event_reactor_perf 00:04:07.004 ************************************ 00:04:07.004 10:21:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:07.004 [2024-11-20 10:21:07.343535] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:07.004 [2024-11-20 10:21:07.343607] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295490 ] 00:04:07.004 [2024-11-20 10:21:07.421864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.004 [2024-11-20 10:21:07.465856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.940 test_start 00:04:07.940 test_end 00:04:07.940 Performance: 482709 events per second 00:04:07.940 00:04:07.940 real 0m1.182s 00:04:07.940 user 0m1.096s 00:04:07.940 sys 0m0.080s 00:04:07.940 10:21:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.940 10:21:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.940 ************************************ 00:04:07.940 END TEST event_reactor_perf 00:04:07.940 ************************************ 00:04:07.940 10:21:08 event -- event/event.sh@49 -- # uname -s 00:04:07.940 10:21:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:07.940 10:21:08 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.940 10:21:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.940 10:21:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.940 10:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.940 ************************************ 00:04:07.940 START TEST event_scheduler 00:04:07.940 ************************************ 00:04:07.940 10:21:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.940 * Looking for test storage... 00:04:07.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.198 10:21:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.198 --rc genhtml_branch_coverage=1 00:04:08.198 --rc genhtml_function_coverage=1 00:04:08.198 --rc genhtml_legend=1 00:04:08.198 --rc geninfo_all_blocks=1 00:04:08.198 --rc geninfo_unexecuted_blocks=1 00:04:08.198 00:04:08.198 ' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.198 --rc genhtml_branch_coverage=1 00:04:08.198 --rc genhtml_function_coverage=1 00:04:08.198 --rc 
genhtml_legend=1 00:04:08.198 --rc geninfo_all_blocks=1 00:04:08.198 --rc geninfo_unexecuted_blocks=1 00:04:08.198 00:04:08.198 ' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.198 --rc genhtml_branch_coverage=1 00:04:08.198 --rc genhtml_function_coverage=1 00:04:08.198 --rc genhtml_legend=1 00:04:08.198 --rc geninfo_all_blocks=1 00:04:08.198 --rc geninfo_unexecuted_blocks=1 00:04:08.198 00:04:08.198 ' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.198 --rc genhtml_branch_coverage=1 00:04:08.198 --rc genhtml_function_coverage=1 00:04:08.198 --rc genhtml_legend=1 00:04:08.198 --rc geninfo_all_blocks=1 00:04:08.198 --rc geninfo_unexecuted_blocks=1 00:04:08.198 00:04:08.198 ' 00:04:08.198 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:08.198 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:08.198 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3295943 00:04:08.198 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.198 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3295943 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3295943 ']' 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.198 10:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.198 [2024-11-20 10:21:08.787340] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:08.198 [2024-11-20 10:21:08.787391] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295943 ] 00:04:08.198 [2024-11-20 10:21:08.865800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:08.198 [2024-11-20 10:21:08.910229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.198 [2024-11-20 10:21:08.910324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.198 [2024-11-20 10:21:08.910435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:08.198 [2024-11-20 10:21:08.910436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:08.458 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 [2024-11-20 10:21:08.962954] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:08.458 [2024-11-20 10:21:08.962973] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:08.458 [2024-11-20 10:21:08.962982] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:08.458 [2024-11-20 10:21:08.962988] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:08.458 [2024-11-20 10:21:08.962993] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 [2024-11-20 10:21:09.038200] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:08.458 10:21:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:08.458 10:21:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.458 10:21:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 ************************************ 00:04:08.458 START TEST scheduler_create_thread 00:04:08.458 ************************************ 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 2 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 3 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 4 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 5 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 6 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 7 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 8 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 9 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 10 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.458 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.459 10:21:09 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:08.459 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.459 10:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.361 10:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.361 10:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.361 10:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.361 10:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.361 10:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.296 10:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.296 00:04:11.296 real 0m2.621s 00:04:11.296 user 0m0.025s 00:04:11.296 sys 0m0.003s 00:04:11.296 10:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.296 10:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.296 ************************************ 00:04:11.296 END TEST scheduler_create_thread 00:04:11.296 ************************************ 00:04:11.296 10:21:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.297 10:21:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3295943 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3295943 ']' 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3295943 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295943 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295943' 00:04:11.297 killing process with pid 3295943 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3295943 00:04:11.297 10:21:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3295943 00:04:11.555 [2024-11-20 10:21:12.172512] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:11.813 00:04:11.813 real 0m3.761s 00:04:11.813 user 0m5.648s 00:04:11.813 sys 0m0.357s 00:04:11.813 10:21:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.813 10:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.813 ************************************ 00:04:11.813 END TEST event_scheduler 00:04:11.814 ************************************ 00:04:11.814 10:21:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:11.814 10:21:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:11.814 10:21:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.814 10:21:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.814 10:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.814 ************************************ 00:04:11.814 START TEST app_repeat 00:04:11.814 ************************************ 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3296904 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3296904' 00:04:11.814 Process app_repeat pid: 3296904 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:11.814 spdk_app_start Round 0 00:04:11.814 10:21:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3296904 /var/tmp/spdk-nbd.sock 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3296904 ']' 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.814 10:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:11.814 [2024-11-20 10:21:12.454106] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:11.814 [2024-11-20 10:21:12.454164] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296904 ] 00:04:11.814 [2024-11-20 10:21:12.529663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.072 [2024-11-20 10:21:12.572138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.072 [2024-11-20 10:21:12.572140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.072 10:21:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.072 10:21:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:12.072 10:21:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.330 Malloc0 00:04:12.330 10:21:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.330 Malloc1 00:04:12.587 10:21:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.587 
10:21:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.587 10:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.587 /dev/nbd0 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:12.845 1+0 records in 00:04:12.845 1+0 records out 00:04:12.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024172 s, 16.9 MB/s 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.845 /dev/nbd1 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.845 10:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.845 10:21:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:13.102 10:21:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:13.102 10:21:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:13.103 10:21:13 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.103 1+0 records in 00:04:13.103 1+0 records out 00:04:13.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228315 s, 17.9 MB/s 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:13.103 10:21:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.103 { 00:04:13.103 "nbd_device": "/dev/nbd0", 00:04:13.103 "bdev_name": "Malloc0" 00:04:13.103 }, 00:04:13.103 { 00:04:13.103 "nbd_device": "/dev/nbd1", 00:04:13.103 "bdev_name": "Malloc1" 00:04:13.103 } 00:04:13.103 ]' 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.103 { 00:04:13.103 "nbd_device": "/dev/nbd0", 00:04:13.103 "bdev_name": "Malloc0" 00:04:13.103 
}, 00:04:13.103 { 00:04:13.103 "nbd_device": "/dev/nbd1", 00:04:13.103 "bdev_name": "Malloc1" 00:04:13.103 } 00:04:13.103 ]' 00:04:13.103 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.361 /dev/nbd1' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.361 /dev/nbd1' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.361 256+0 records in 00:04:13.361 256+0 records out 00:04:13.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106027 s, 98.9 MB/s 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.361 256+0 records in 00:04:13.361 256+0 records out 00:04:13.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143491 s, 73.1 MB/s 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.361 256+0 records in 00:04:13.361 256+0 records out 00:04:13.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148253 s, 70.7 MB/s 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.361 10:21:13 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.361 10:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.619 10:21:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.877 10:21:14 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.877 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.134 10:21:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.134 10:21:14 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.134 10:21:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.392 [2024-11-20 10:21:14.993170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.392 [2024-11-20 10:21:15.030043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.392 [2024-11-20 10:21:15.030044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.392 [2024-11-20 10:21:15.071087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.392 [2024-11-20 10:21:15.071132] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.672 10:21:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.672 10:21:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:17.672 spdk_app_start Round 1 00:04:17.672 10:21:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3296904 /var/tmp/spdk-nbd.sock 00:04:17.672 10:21:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3296904 ']' 00:04:17.672 10:21:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.672 10:21:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.672 10:21:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:17.673 10:21:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.673 10:21:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.673 10:21:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.673 10:21:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:17.673 10:21:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.673 Malloc0 00:04:17.673 10:21:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.930 Malloc1 00:04:17.930 10:21:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.930 10:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:17.930 /dev/nbd0 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.188 1+0 records in 00:04:18.188 1+0 records out 00:04:18.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235025 s, 17.4 MB/s 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.188 10:21:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.188 10:21:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.188 /dev/nbd1 00:04:18.188 10:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.446 1+0 records in 00:04:18.446 1+0 records out 00:04:18.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202245 s, 20.3 MB/s 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.446 10:21:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.446 10:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.446 10:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.446 { 00:04:18.447 "nbd_device": "/dev/nbd0", 00:04:18.447 "bdev_name": "Malloc0" 00:04:18.447 }, 00:04:18.447 { 00:04:18.447 "nbd_device": "/dev/nbd1", 00:04:18.447 "bdev_name": "Malloc1" 00:04:18.447 } 00:04:18.447 ]' 00:04:18.447 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.447 { 00:04:18.447 "nbd_device": "/dev/nbd0", 00:04:18.447 "bdev_name": "Malloc0" 00:04:18.447 }, 00:04:18.447 { 00:04:18.447 "nbd_device": "/dev/nbd1", 00:04:18.447 "bdev_name": "Malloc1" 00:04:18.447 } 00:04:18.447 ]' 00:04:18.447 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.704 /dev/nbd1' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.704 /dev/nbd1' 00:04:18.704 
10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.704 256+0 records in 00:04:18.704 256+0 records out 00:04:18.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996876 s, 105 MB/s 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.704 256+0 records in 00:04:18.704 256+0 records out 00:04:18.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142185 s, 73.7 MB/s 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.704 256+0 records in 00:04:18.704 256+0 records out 00:04:18.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014873 s, 70.5 MB/s 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.704 10:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:18.962 10:21:19 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.962 10:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.220 10:21:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.220 10:21:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.478 10:21:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.736 [2024-11-20 10:21:20.302321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.736 [2024-11-20 10:21:20.340256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.736 [2024-11-20 10:21:20.340257] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.736 [2024-11-20 10:21:20.382339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.736 [2024-11-20 10:21:20.382379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.016 10:21:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.016 10:21:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.016 spdk_app_start Round 2 00:04:23.016 10:21:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3296904 /var/tmp/spdk-nbd.sock 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3296904 ']' 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.016 10:21:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.016 10:21:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.016 Malloc0 00:04:23.017 10:21:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.275 Malloc1 00:04:23.275 10:21:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.275 /dev/nbd0 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.275 10:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.275 10:21:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.275 10:21:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.275 10:21:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.275 10:21:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.275 10:21:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:23.275 10:21:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.275 10:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.533 1+0 records in 00:04:23.533 1+0 records out 00:04:23.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186872 s, 21.9 MB/s 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.533 10:21:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.533 10:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.533 10:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.533 10:21:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.533 /dev/nbd1 00:04:23.533 10:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.533 10:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.533 1+0 records in 00:04:23.533 1+0 records out 00:04:23.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236806 s, 17.3 MB/s 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.533 10:21:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.792 10:21:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.792 10:21:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.792 { 00:04:23.792 "nbd_device": "/dev/nbd0", 00:04:23.792 "bdev_name": "Malloc0" 00:04:23.792 }, 00:04:23.792 { 00:04:23.792 "nbd_device": "/dev/nbd1", 00:04:23.792 "bdev_name": "Malloc1" 00:04:23.792 } 00:04:23.792 ]' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.792 { 00:04:23.792 "nbd_device": "/dev/nbd0", 00:04:23.792 "bdev_name": "Malloc0" 00:04:23.792 }, 00:04:23.792 { 00:04:23.792 "nbd_device": "/dev/nbd1", 00:04:23.792 "bdev_name": "Malloc1" 00:04:23.792 } 00:04:23.792 ]' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.792 /dev/nbd1' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.792 /dev/nbd1' 00:04:23.792 
10:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.792 10:21:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.049 256+0 records in 00:04:24.049 256+0 records out 00:04:24.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998677 s, 105 MB/s 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.049 256+0 records in 00:04:24.049 256+0 records out 00:04:24.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013871 s, 75.6 MB/s 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.049 256+0 records in 00:04:24.049 256+0 records out 00:04:24.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150464 s, 69.7 MB/s 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.049 10:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.306 10:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.306 10:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.306 10:21:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.306 10:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.307 10:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.307 10:21:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.307 10:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.564 10:21:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.564 10:21:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.821 10:21:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.078 [2024-11-20 10:21:25.636428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.078 [2024-11-20 10:21:25.673701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.078 [2024-11-20 10:21:25.673702] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.078 [2024-11-20 10:21:25.714978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.078 [2024-11-20 10:21:25.715021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.357 10:21:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3296904 /var/tmp/spdk-nbd.sock 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3296904 ']' 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
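The nbd_dd_data_verify trace at the start of this section writes a random test file onto each NBD device with dd, byte-compares it back with `cmp -b -n 1M`, then removes the file. A minimal standalone sketch of that write-then-verify pattern, with ordinary temp files standing in for /dev/nbd0 and /dev/nbd1 (the stand-in paths are illustrative, not from the log):

```shell
# Write-then-verify sketch modeled on bdev/nbd_common.sh's
# nbd_dd_data_verify; temp files play the role of the NBD devices.
tmp_file=$(mktemp)                       # stands in for .../test/event/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null

nbd_list=("$(mktemp)" "$(mktemp)")       # would be ('/dev/nbd0' '/dev/nbd1')

# "write" operation: copy the random data onto every device in the list
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
done

# "verify" operation: byte-compare the first 1M of each device to the source
verify=ok
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i" || verify=failed
done
rm -f "$tmp_file" "${nbd_list[@]}"
echo "verify: $verify"
```

The real test additionally passes oflag=direct so the writes bypass the page cache and actually reach the block device before the compare.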
00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.357 10:21:28 event.app_repeat -- event/event.sh@39 -- # killprocess 3296904 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3296904 ']' 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3296904 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296904 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296904' 00:04:28.357 killing process with pid 3296904 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3296904 00:04:28.357 10:21:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3296904 00:04:28.357 spdk_app_start is called in Round 0. 00:04:28.357 Shutdown signal received, stop current app iteration 00:04:28.357 Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 reinitialization... 00:04:28.357 spdk_app_start is called in Round 1. 00:04:28.357 Shutdown signal received, stop current app iteration 00:04:28.358 Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 reinitialization... 00:04:28.358 spdk_app_start is called in Round 2. 
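Both waitfornbd_exit (bdev/nbd_common.sh@37-45) and waitforlisten with its max_retries=100, traced above, are the same bounded-polling idiom: re-check a condition a fixed number of times with a short sleep, and break out the moment it holds. A generic sketch of that loop, polling for a hypothetical marker file rather than an nbd entry in /proc/partitions:

```shell
# Bounded polling loop in the style of waitfornbd_exit: up to 20 probes,
# break as soon as the watched resource is gone. The marker file is a
# hypothetical stand-in for the nbd entry in /proc/partitions.
marker=$(mktemp)
( sleep 0.2 && rm -f "$marker" ) &       # something releases the resource soon

result=timeout
for (( i = 1; i <= 20; i++ )); do
    if ! [ -e "$marker" ]; then          # log equivalent: ! grep -q -w nbd0 /proc/partitions
        result=gone
        break
    fi
    sleep 0.1
done
wait
echo "$result"
```

If all 20 probes fail, the loop falls through with `result=timeout`, which is where the real helper returns nonzero instead of `return 0`.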
00:04:28.358 Shutdown signal received, stop current app iteration 00:04:28.358 Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 reinitialization... 00:04:28.358 spdk_app_start is called in Round 3. 00:04:28.358 Shutdown signal received, stop current app iteration 00:04:28.358 10:21:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.358 10:21:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.358 00:04:28.358 real 0m16.466s 00:04:28.358 user 0m36.276s 00:04:28.358 sys 0m2.549s 00:04:28.358 10:21:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.358 10:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.358 ************************************ 00:04:28.358 END TEST app_repeat 00:04:28.358 ************************************ 00:04:28.358 10:21:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.358 10:21:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.358 10:21:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.358 10:21:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.358 10:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.358 ************************************ 00:04:28.358 START TEST cpu_locks 00:04:28.358 ************************************ 00:04:28.358 10:21:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.358 * Looking for test storage... 
00:04:28.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.358 10:21:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.358 10:21:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.358 10:21:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.629 10:21:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.629 --rc genhtml_branch_coverage=1 00:04:28.629 --rc genhtml_function_coverage=1 00:04:28.629 --rc genhtml_legend=1 00:04:28.629 --rc geninfo_all_blocks=1 00:04:28.629 --rc geninfo_unexecuted_blocks=1 00:04:28.629 00:04:28.629 ' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.629 --rc genhtml_branch_coverage=1 00:04:28.629 --rc genhtml_function_coverage=1 00:04:28.629 --rc genhtml_legend=1 00:04:28.629 --rc geninfo_all_blocks=1 00:04:28.629 --rc geninfo_unexecuted_blocks=1 
00:04:28.629 00:04:28.629 ' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.629 --rc genhtml_branch_coverage=1 00:04:28.629 --rc genhtml_function_coverage=1 00:04:28.629 --rc genhtml_legend=1 00:04:28.629 --rc geninfo_all_blocks=1 00:04:28.629 --rc geninfo_unexecuted_blocks=1 00:04:28.629 00:04:28.629 ' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.629 --rc genhtml_branch_coverage=1 00:04:28.629 --rc genhtml_function_coverage=1 00:04:28.629 --rc genhtml_legend=1 00:04:28.629 --rc geninfo_all_blocks=1 00:04:28.629 --rc geninfo_unexecuted_blocks=1 00:04:28.629 00:04:28.629 ' 00:04:28.629 10:21:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.629 10:21:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.629 10:21:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.629 10:21:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.629 10:21:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.629 ************************************ 00:04:28.629 START TEST default_locks 00:04:28.629 ************************************ 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3299910 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3299910 00:04:28.629 10:21:29 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3299910 ']' 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.629 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.629 [2024-11-20 10:21:29.208566] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
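The lcov probe above (scripts/common.sh@333-368) is a dotted-version comparison: split both versions into arrays, normalize each component with the `decimal` helper, and compare component by component until one side wins. A simplified, self-contained sketch of the same idea (it splits on '.' only and assumes numeric components; the pre-release handling of the real cmp_versions is omitted):

```shell
# Component-wise "less than" for dotted versions, after scripts/common.sh's
# cmp_versions (simplified sketch, numeric components assumed).
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0           # earliest differing component decides
        (( a > b )) && return 1
    done
    return 1                              # equal versions are not "lt"
}

version_lt 1.15 2 && r1=lt || r1=ge       # the comparison from the trace
version_lt 2.1 2.0.5 && r2=lt || r2=ge
echo "$r1 $r2"
```

Comparing component-wise is what makes 1.15 sort below 2 even though a plain string compare would put "1.15" after "2" is false but "9" before "10" wrong.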
00:04:28.629 [2024-11-20 10:21:29.208612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299910 ] 00:04:28.629 [2024-11-20 10:21:29.280720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.629 [2024-11-20 10:21:29.320797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.888 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.888 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:28.888 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3299910 00:04:28.888 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3299910 00:04:28.888 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.147 lslocks: write error 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3299910 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3299910 ']' 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3299910 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.147 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3299910 00:04:29.406 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.406 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.406 10:21:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3299910' 00:04:29.406 killing process with pid 3299910 00:04:29.406 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3299910 00:04:29.406 10:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3299910 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3299910 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3299910 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.665 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3299910 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3299910 ']' 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
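killprocess (common/autotest_common.sh@954-978), traced here for pid 3299910, is deliberately defensive: confirm the pid is alive with `kill -0`, look up its command name with ps so a sudo wrapper is never signalled, then kill and reap it. A sketch of that liveness-check-then-kill sequence against a throwaway sleep process (the sleep is a stand-in for the spdk_tgt target):

```shell
# Liveness-check-then-kill sketch after autotest_common.sh's killprocess;
# a background sleep stands in for the spdk_tgt process from the log.
sleep 30 &
pid=$!

killed=no
if kill -0 "$pid" 2>/dev/null; then                 # is the pid alive?
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the log
    if [ "$process_name" != sudo ]; then            # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap; exit status ignored
        killed=yes
    fi
fi
echo "$killed"
```

Reaping with `wait` is what lets the follow-up `NOT waitforlisten` in the trace see "No such process" rather than a zombie.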
00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3299910) - No such process 00:04:29.666 ERROR: process (pid: 3299910) is no longer running 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.666 00:04:29.666 real 0m1.073s 00:04:29.666 user 0m1.033s 00:04:29.666 sys 0m0.486s 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.666 10:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.666 ************************************ 00:04:29.666 END TEST default_locks 00:04:29.666 ************************************ 00:04:29.666 10:21:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.666 10:21:30 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.666 10:21:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.666 10:21:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.666 ************************************ 00:04:29.666 START TEST default_locks_via_rpc 00:04:29.666 ************************************ 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3300167 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3300167 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3300167 ']' 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.666 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.666 [2024-11-20 10:21:30.347907] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
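The `NOT waitforlisten 3299910` sequence above deliberately runs waitforlisten against a pid that is already dead, and the test passes only because the call fails ("No such process", es=1). autotest_common.sh wraps this inversion in a NOT-style helper; a simplified sketch (the real helper also treats exit codes above 128 specially, as the `(( es > 128 ))` trace shows):

```shell
# Negative-test wrapper in the spirit of autotest_common.sh's NOT helper:
# succeed only when the wrapped command fails (simplified sketch).
NOT() {
    if "$@"; then
        return 1    # the command was expected to fail but succeeded
    fi
    return 0
}

NOT false && neg1=pass || neg1=fail   # false fails, so NOT reports pass
NOT true  && neg2=pass || neg2=fail   # true succeeds, so NOT reports fail
echo "$neg1 $neg2"
```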
00:04:29.666 [2024-11-20 10:21:30.347961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300167 ] 00:04:29.925 [2024-11-20 10:21:30.423892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.925 [2024-11-20 10:21:30.463659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.183 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.183 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.183 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.184 10:21:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3300167 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3300167 00:04:30.184 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.442 10:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3300167 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3300167 ']' 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3300167 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300167 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300167' 00:04:30.443 killing process with pid 3300167 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3300167 00:04:30.443 10:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3300167 00:04:30.702 00:04:30.702 real 0m0.999s 00:04:30.702 user 0m0.967s 00:04:30.702 sys 0m0.445s 00:04:30.702 10:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.702 10:21:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.702 ************************************ 00:04:30.702 END TEST default_locks_via_rpc 00:04:30.702 ************************************ 00:04:30.702 10:21:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:30.702 10:21:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.702 10:21:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.702 10:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.702 ************************************ 00:04:30.702 START TEST non_locking_app_on_locked_coremask 00:04:30.702 ************************************ 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3300421 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3300421 /var/tmp/spdk.sock 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3300421 ']' 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.702 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.702 [2024-11-20 10:21:31.406991] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:30.702 [2024-11-20 10:21:31.407029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300421 ] 00:04:30.960 [2024-11-20 10:21:31.464482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.960 [2024-11-20 10:21:31.507988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3300424 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3300424 /var/tmp/spdk2.sock 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3300424 ']' 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.219 10:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.219 [2024-11-20 10:21:31.774569] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:31.219 [2024-11-20 10:21:31.774617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300424 ] 00:04:31.219 [2024-11-20 10:21:31.866203] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
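The second spdk_tgt above comes up on the same core mask only because --disable-cpumask-locks deactivates the per-core lock files that locks_exist greps for (the spdk_cpu_lock entries in the lslocks output). The underlying mechanism is an exclusive lock on a well-known file; a sketch of that mutual exclusion with flock(1), using a made-up lock path rather than SPDK's real per-core lock file:

```shell
# File-lock mutual exclusion sketch in the spirit of SPDK's per-core
# spdk_cpu_lock files; the lock file below is a hypothetical stand-in.
lockfile=$(mktemp)

exec 9>"$lockfile"                        # first "instance" opens the lock file
flock -n 9 && first=acquired || first=busy

# A second instance on the same "core" must fail to take the lock ...
if ( flock -n 10 ) 10>"$lockfile"; then
    second=acquired
else
    second=busy                           # ... which is what locks_exist detects
fi

exec 9>&-                                 # closing the fd releases the lock
rm -f "$lockfile"
echo "$first $second"
```

Because the lock dies with the file descriptor, a crashed instance releases its core locks automatically, so no stale-lock cleanup pass is needed.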
00:04:31.219 [2024-11-20 10:21:31.866224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.219 [2024-11-20 10:21:31.947529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.155 10:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.155 10:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.155 10:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3300421 00:04:32.155 10:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3300421 00:04:32.155 10:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.722 lslocks: write error 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3300421 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3300421 ']' 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3300421 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300421 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3300421' 00:04:32.722 killing process with pid 3300421 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3300421 00:04:32.722 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3300421 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3300424 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3300424 ']' 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3300424 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300424 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300424' 00:04:33.290 killing process with pid 3300424 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3300424 00:04:33.290 10:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3300424 00:04:33.549 00:04:33.549 real 0m2.786s 00:04:33.549 user 0m2.975s 00:04:33.549 sys 0m0.923s 00:04:33.549 10:21:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.549 10:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.549 ************************************ 00:04:33.549 END TEST non_locking_app_on_locked_coremask 00:04:33.549 ************************************ 00:04:33.549 10:21:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:33.549 10:21:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.549 10:21:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.549 10:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.549 ************************************ 00:04:33.549 START TEST locking_app_on_unlocked_coremask 00:04:33.549 ************************************ 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3300918 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3300918 /var/tmp/spdk.sock 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3300918 ']' 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.549 10:21:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.549 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.549 [2024-11-20 10:21:34.270489] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:33.549 [2024-11-20 10:21:34.270533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300918 ] 00:04:33.808 [2024-11-20 10:21:34.345467] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:33.808 [2024-11-20 10:21:34.345492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.808 [2024-11-20 10:21:34.383228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3300928 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3300928 /var/tmp/spdk2.sock 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3300928 ']' 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.068 10:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.068 [2024-11-20 10:21:34.656447] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:34.068 [2024-11-20 10:21:34.656498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300928 ] 00:04:34.068 [2024-11-20 10:21:34.749339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.327 [2024-11-20 10:21:34.830569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.895 10:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.895 10:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:34.895 10:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3300928 00:04:34.895 10:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3300928 00:04:34.895 10:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.463 lslocks: write error 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3300918 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3300918 ']' 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3300918 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300918 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300918' 00:04:35.463 killing process with pid 3300918 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3300918 00:04:35.463 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3300918 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3300928 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3300928 ']' 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3300928 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300928 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300928' 00:04:36.032 killing process with pid 3300928 00:04:36.032 10:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3300928 00:04:36.032 10:21:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3300928 00:04:36.292 00:04:36.292 real 0m2.796s 00:04:36.292 user 0m2.956s 00:04:36.292 sys 0m0.908s 00:04:36.292 10:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.292 10:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.292 ************************************ 00:04:36.292 END TEST locking_app_on_unlocked_coremask 00:04:36.292 ************************************ 00:04:36.552 10:21:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:36.552 10:21:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.552 10:21:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.552 10:21:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.552 ************************************ 00:04:36.552 START TEST locking_app_on_locked_coremask 00:04:36.552 ************************************ 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3301417 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3301417 /var/tmp/spdk.sock 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3301417 ']' 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.552 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.552 [2024-11-20 10:21:37.137414] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:36.552 [2024-11-20 10:21:37.137462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301417 ] 00:04:36.552 [2024-11-20 10:21:37.209499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.553 [2024-11-20 10:21:37.247975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3301426 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3301426 /var/tmp/spdk2.sock 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3301426 /var/tmp/spdk2.sock 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3301426 /var/tmp/spdk2.sock 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3301426 ']' 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.812 10:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.812 [2024-11-20 10:21:37.518863] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:36.812 [2024-11-20 10:21:37.518907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301426 ] 00:04:37.071 [2024-11-20 10:21:37.607527] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3301417 has claimed it. 00:04:37.071 [2024-11-20 10:21:37.607568] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:37.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3301426) - No such process 00:04:37.637 ERROR: process (pid: 3301426) is no longer running 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3301417 00:04:37.637 10:21:38 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3301417 00:04:37.637 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.204 lslocks: write error 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3301417 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3301417 ']' 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3301417 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3301417 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3301417' 00:04:38.204 killing process with pid 3301417 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3301417 00:04:38.204 10:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3301417 00:04:38.463 00:04:38.463 real 0m2.025s 00:04:38.463 user 0m2.172s 00:04:38.463 sys 0m0.670s 00:04:38.463 10:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.463 10:21:39 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.463 ************************************ 00:04:38.463 END TEST locking_app_on_locked_coremask 00:04:38.463 ************************************ 00:04:38.463 10:21:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:38.463 10:21:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.463 10:21:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.463 10:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.463 ************************************ 00:04:38.463 START TEST locking_overlapped_coremask 00:04:38.463 ************************************ 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3301710 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3301710 /var/tmp/spdk.sock 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3301710 ']' 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.463 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.721 [2024-11-20 10:21:39.232551] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:38.721 [2024-11-20 10:21:39.232597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301710 ] 00:04:38.721 [2024-11-20 10:21:39.308356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:38.721 [2024-11-20 10:21:39.353241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.721 [2024-11-20 10:21:39.353352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.721 [2024-11-20 10:21:39.353352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3301914 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3301914 /var/tmp/spdk2.sock 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3301914 /var/tmp/spdk2.sock 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3301914 /var/tmp/spdk2.sock 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3301914 ']' 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.979 10:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.979 [2024-11-20 10:21:39.615998] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:38.979 [2024-11-20 10:21:39.616046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301914 ] 00:04:39.237 [2024-11-20 10:21:39.709876] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3301710 has claimed it. 00:04:39.237 [2024-11-20 10:21:39.709915] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3301914) - No such process 00:04:39.804 ERROR: process (pid: 3301914) is no longer running 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3301710 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3301710 ']' 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3301710 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3301710 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3301710' 00:04:39.804 killing process with pid 3301710 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3301710 00:04:39.804 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3301710 00:04:40.064 00:04:40.064 real 0m1.431s 00:04:40.064 user 0m3.911s 00:04:40.064 sys 0m0.409s 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.064 
************************************ 00:04:40.064 END TEST locking_overlapped_coremask 00:04:40.064 ************************************ 00:04:40.064 10:21:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:40.064 10:21:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.064 10:21:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.064 10:21:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.064 ************************************ 00:04:40.064 START TEST locking_overlapped_coremask_via_rpc 00:04:40.064 ************************************ 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3302059 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3302059 /var/tmp/spdk.sock 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3302059 ']' 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:40.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.064 10:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.064 [2024-11-20 10:21:40.735508] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:40.064 [2024-11-20 10:21:40.735554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302059 ] 00:04:40.323 [2024-11-20 10:21:40.810389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:40.323 [2024-11-20 10:21:40.810415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.323 [2024-11-20 10:21:40.855852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.323 [2024-11-20 10:21:40.856004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.323 [2024-11-20 10:21:40.856004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3302177 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3302177 /var/tmp/spdk2.sock 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3302177 ']' 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.581 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.581 [2024-11-20 10:21:41.123473] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:40.581 [2024-11-20 10:21:41.123517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302177 ] 00:04:40.581 [2024-11-20 10:21:41.216979] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.581 [2024-11-20 10:21:41.217006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.581 [2024-11-20 10:21:41.305032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.581 [2024-11-20 10:21:41.305143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.581 [2024-11-20 10:21:41.305145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.515 10:21:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.515 [2024-11-20 10:21:41.979020] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3302059 has claimed it. 00:04:41.515 request: 00:04:41.515 { 00:04:41.515 "method": "framework_enable_cpumask_locks", 00:04:41.515 "req_id": 1 00:04:41.515 } 00:04:41.515 Got JSON-RPC error response 00:04:41.515 response: 00:04:41.515 { 00:04:41.515 "code": -32603, 00:04:41.515 "message": "Failed to claim CPU core: 2" 00:04:41.515 } 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3302059 /var/tmp/spdk.sock 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3302059 ']' 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.515 10:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3302177 /var/tmp/spdk2.sock 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3302177 ']' 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.515 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.773 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.774 00:04:41.774 real 0m1.723s 00:04:41.774 user 0m0.822s 00:04:41.774 sys 0m0.144s 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.774 10:21:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.774 ************************************ 00:04:41.774 END TEST locking_overlapped_coremask_via_rpc 00:04:41.774 ************************************ 00:04:41.774 10:21:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:41.774 10:21:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3302059 ]] 00:04:41.774 10:21:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3302059 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3302059 ']' 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3302059 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3302059 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3302059' 00:04:41.774 killing process with pid 3302059 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3302059 00:04:41.774 10:21:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3302059 00:04:42.340 10:21:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3302177 ]] 00:04:42.340 10:21:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3302177 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3302177 ']' 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3302177 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3302177 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3302177' 00:04:42.340 killing process with pid 3302177 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3302177 00:04:42.340 10:21:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3302177 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3302059 ]] 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3302059 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3302059 ']' 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3302059 00:04:42.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3302059) - No such process 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3302059 is not found' 00:04:42.599 Process with pid 3302059 is not found 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3302177 ]] 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3302177 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3302177 ']' 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3302177 00:04:42.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3302177) - No such process 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3302177 is not found' 00:04:42.599 Process with pid 3302177 is not found 00:04:42.599 10:21:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.599 00:04:42.599 real 0m14.215s 00:04:42.599 user 0m24.624s 00:04:42.599 sys 0m4.949s 00:04:42.599 10:21:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.599 
10:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.599 ************************************ 00:04:42.599 END TEST cpu_locks 00:04:42.599 ************************************ 00:04:42.599 00:04:42.599 real 0m38.611s 00:04:42.599 user 1m13.117s 00:04:42.599 sys 0m8.481s 00:04:42.599 10:21:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.599 10:21:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.599 ************************************ 00:04:42.599 END TEST event 00:04:42.599 ************************************ 00:04:42.599 10:21:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.599 10:21:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.599 10:21:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.599 10:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:42.599 ************************************ 00:04:42.599 START TEST thread 00:04:42.599 ************************************ 00:04:42.600 10:21:43 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.858 * Looking for test storage... 
00:04:42.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:42.858 10:21:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.858 10:21:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.859 10:21:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.859 10:21:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.859 10:21:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.859 10:21:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.859 10:21:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.859 10:21:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.859 10:21:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.859 10:21:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.859 10:21:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.859 10:21:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.859 10:21:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.859 10:21:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:42.859 10:21:43 thread -- scripts/common.sh@345 -- # : 1 00:04:42.859 10:21:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.859 10:21:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.859 10:21:43 thread -- scripts/common.sh@365 -- # decimal 1 00:04:42.859 10:21:43 thread -- scripts/common.sh@353 -- # local d=1 00:04:42.859 10:21:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.859 10:21:43 thread -- scripts/common.sh@355 -- # echo 1 00:04:42.859 10:21:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.859 10:21:43 thread -- scripts/common.sh@366 -- # decimal 2 00:04:42.859 10:21:43 thread -- scripts/common.sh@353 -- # local d=2 00:04:42.859 10:21:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.859 10:21:43 thread -- scripts/common.sh@355 -- # echo 2 00:04:42.859 10:21:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.859 10:21:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.859 10:21:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.859 10:21:43 thread -- scripts/common.sh@368 -- # return 0 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.859 --rc genhtml_branch_coverage=1 00:04:42.859 --rc genhtml_function_coverage=1 00:04:42.859 --rc genhtml_legend=1 00:04:42.859 --rc geninfo_all_blocks=1 00:04:42.859 --rc geninfo_unexecuted_blocks=1 00:04:42.859 00:04:42.859 ' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.859 --rc genhtml_branch_coverage=1 00:04:42.859 --rc genhtml_function_coverage=1 00:04:42.859 --rc genhtml_legend=1 00:04:42.859 --rc geninfo_all_blocks=1 00:04:42.859 --rc geninfo_unexecuted_blocks=1 00:04:42.859 00:04:42.859 ' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.859 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.859 --rc genhtml_branch_coverage=1 00:04:42.859 --rc genhtml_function_coverage=1 00:04:42.859 --rc genhtml_legend=1 00:04:42.859 --rc geninfo_all_blocks=1 00:04:42.859 --rc geninfo_unexecuted_blocks=1 00:04:42.859 00:04:42.859 ' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.859 --rc genhtml_branch_coverage=1 00:04:42.859 --rc genhtml_function_coverage=1 00:04:42.859 --rc genhtml_legend=1 00:04:42.859 --rc geninfo_all_blocks=1 00:04:42.859 --rc geninfo_unexecuted_blocks=1 00:04:42.859 00:04:42.859 ' 00:04:42.859 10:21:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.859 10:21:43 thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.859 ************************************ 00:04:42.859 START TEST thread_poller_perf 00:04:42.859 ************************************ 00:04:42.859 10:21:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.859 [2024-11-20 10:21:43.514108] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:42.859 [2024-11-20 10:21:43.514164] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302657 ] 00:04:43.118 [2024-11-20 10:21:43.595115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.118 [2024-11-20 10:21:43.636616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.118 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:44.055 [2024-11-20T09:21:44.786Z] ====================================== 00:04:44.055 [2024-11-20T09:21:44.786Z] busy:2307590770 (cyc) 00:04:44.055 [2024-11-20T09:21:44.786Z] total_run_count: 412000 00:04:44.055 [2024-11-20T09:21:44.786Z] tsc_hz: 2300000000 (cyc) 00:04:44.055 [2024-11-20T09:21:44.786Z] ====================================== 00:04:44.055 [2024-11-20T09:21:44.786Z] poller_cost: 5600 (cyc), 2434 (nsec) 00:04:44.055 00:04:44.055 real 0m1.188s 00:04:44.055 user 0m1.108s 00:04:44.055 sys 0m0.076s 00:04:44.055 10:21:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.055 10:21:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.055 ************************************ 00:04:44.055 END TEST thread_poller_perf 00:04:44.055 ************************************ 00:04:44.055 10:21:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.055 10:21:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:44.055 10:21:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.055 10:21:44 thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.055 ************************************ 00:04:44.055 START TEST thread_poller_perf 00:04:44.055 
************************************ 00:04:44.055 10:21:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.056 [2024-11-20 10:21:44.777356] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:44.056 [2024-11-20 10:21:44.777425] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302834 ] 00:04:44.314 [2024-11-20 10:21:44.858426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.314 [2024-11-20 10:21:44.900747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.314 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:45.250 [2024-11-20T09:21:45.981Z] ====================================== 00:04:45.250 [2024-11-20T09:21:45.981Z] busy:2301499654 (cyc) 00:04:45.250 [2024-11-20T09:21:45.981Z] total_run_count: 5419000 00:04:45.250 [2024-11-20T09:21:45.981Z] tsc_hz: 2300000000 (cyc) 00:04:45.250 [2024-11-20T09:21:45.981Z] ====================================== 00:04:45.250 [2024-11-20T09:21:45.981Z] poller_cost: 424 (cyc), 184 (nsec) 00:04:45.250 00:04:45.250 real 0m1.184s 00:04:45.250 user 0m1.106s 00:04:45.250 sys 0m0.075s 00:04:45.250 10:21:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.250 10:21:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.250 ************************************ 00:04:45.250 END TEST thread_poller_perf 00:04:45.250 ************************************ 00:04:45.250 10:21:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:45.250 00:04:45.250 real 0m2.697s 00:04:45.250 user 0m2.377s 00:04:45.250 sys 0m0.336s 00:04:45.250 10:21:45 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.250 10:21:45 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.250 ************************************ 00:04:45.250 END TEST thread 00:04:45.250 ************************************ 00:04:45.510 10:21:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:45.510 10:21:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.510 10:21:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.510 10:21:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.510 10:21:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 ************************************ 00:04:45.510 START TEST app_cmdline 00:04:45.510 ************************************ 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.510 * Looking for test storage... 00:04:45.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.510 10:21:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.510 --rc genhtml_branch_coverage=1 
00:04:45.510 --rc genhtml_function_coverage=1 00:04:45.510 --rc genhtml_legend=1 00:04:45.510 --rc geninfo_all_blocks=1 00:04:45.510 --rc geninfo_unexecuted_blocks=1 00:04:45.510 00:04:45.510 ' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.510 --rc genhtml_branch_coverage=1 00:04:45.510 --rc genhtml_function_coverage=1 00:04:45.510 --rc genhtml_legend=1 00:04:45.510 --rc geninfo_all_blocks=1 00:04:45.510 --rc geninfo_unexecuted_blocks=1 00:04:45.510 00:04:45.510 ' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.510 --rc genhtml_branch_coverage=1 00:04:45.510 --rc genhtml_function_coverage=1 00:04:45.510 --rc genhtml_legend=1 00:04:45.510 --rc geninfo_all_blocks=1 00:04:45.510 --rc geninfo_unexecuted_blocks=1 00:04:45.510 00:04:45.510 ' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.510 --rc genhtml_branch_coverage=1 00:04:45.510 --rc genhtml_function_coverage=1 00:04:45.510 --rc genhtml_legend=1 00:04:45.510 --rc geninfo_all_blocks=1 00:04:45.510 --rc geninfo_unexecuted_blocks=1 00:04:45.510 00:04:45.510 ' 00:04:45.510 10:21:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:45.510 10:21:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3303185 00:04:45.510 10:21:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3303185 00:04:45.510 10:21:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3303185 ']' 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.510 10:21:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:45.770 [2024-11-20 10:21:46.283654] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:04:45.770 [2024-11-20 10:21:46.283707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303185 ] 00:04:45.770 [2024-11-20 10:21:46.358467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.770 [2024-11-20 10:21:46.402534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.029 10:21:46 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.029 10:21:46 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:46.029 10:21:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:46.288 { 00:04:46.288 "version": "SPDK v25.01-pre git sha1 876509865", 00:04:46.288 "fields": { 00:04:46.288 "major": 25, 00:04:46.288 "minor": 1, 00:04:46.288 "patch": 0, 00:04:46.288 "suffix": "-pre", 00:04:46.288 "commit": "876509865" 00:04:46.288 } 00:04:46.288 } 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:46.288 10:21:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.289 10:21:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.289 10:21:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:46.289 10:21:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:46.289 10:21:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:46.289 10:21:46 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.548 request: 00:04:46.548 { 00:04:46.548 "method": "env_dpdk_get_mem_stats", 00:04:46.548 "req_id": 1 00:04:46.548 } 00:04:46.548 Got JSON-RPC error response 00:04:46.548 response: 00:04:46.548 { 00:04:46.548 "code": -32601, 00:04:46.548 "message": "Method not found" 00:04:46.548 } 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.548 10:21:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3303185 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3303185 ']' 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3303185 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3303185 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3303185' 00:04:46.548 killing process with pid 3303185 00:04:46.548 
10:21:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 3303185 00:04:46.548 10:21:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 3303185 00:04:46.807 00:04:46.807 real 0m1.367s 00:04:46.807 user 0m1.580s 00:04:46.807 sys 0m0.467s 00:04:46.807 10:21:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.807 10:21:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.807 ************************************ 00:04:46.807 END TEST app_cmdline 00:04:46.807 ************************************ 00:04:46.807 10:21:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.807 10:21:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.807 10:21:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.807 10:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:46.807 ************************************ 00:04:46.807 START TEST version 00:04:46.807 ************************************ 00:04:46.807 10:21:47 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:47.067 * Looking for test storage... 
00:04:47.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.067 10:21:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.067 10:21:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.067 10:21:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.067 10:21:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.067 10:21:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.067 10:21:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.067 10:21:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.067 10:21:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.067 10:21:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.067 10:21:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.067 10:21:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.067 10:21:47 version -- scripts/common.sh@344 -- # case "$op" in 00:04:47.067 10:21:47 version -- scripts/common.sh@345 -- # : 1 00:04:47.067 10:21:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.067 10:21:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.067 10:21:47 version -- scripts/common.sh@365 -- # decimal 1 00:04:47.067 10:21:47 version -- scripts/common.sh@353 -- # local d=1 00:04:47.067 10:21:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.067 10:21:47 version -- scripts/common.sh@355 -- # echo 1 00:04:47.067 10:21:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.067 10:21:47 version -- scripts/common.sh@366 -- # decimal 2 00:04:47.067 10:21:47 version -- scripts/common.sh@353 -- # local d=2 00:04:47.067 10:21:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.067 10:21:47 version -- scripts/common.sh@355 -- # echo 2 00:04:47.067 10:21:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.067 10:21:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.067 10:21:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.067 10:21:47 version -- scripts/common.sh@368 -- # return 0 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.067 --rc genhtml_branch_coverage=1 00:04:47.067 --rc genhtml_function_coverage=1 00:04:47.067 --rc genhtml_legend=1 00:04:47.067 --rc geninfo_all_blocks=1 00:04:47.067 --rc geninfo_unexecuted_blocks=1 00:04:47.067 00:04:47.067 ' 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.067 --rc genhtml_branch_coverage=1 00:04:47.067 --rc genhtml_function_coverage=1 00:04:47.067 --rc genhtml_legend=1 00:04:47.067 --rc geninfo_all_blocks=1 00:04:47.067 --rc geninfo_unexecuted_blocks=1 00:04:47.067 00:04:47.067 ' 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.067 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.067 --rc genhtml_branch_coverage=1 00:04:47.067 --rc genhtml_function_coverage=1 00:04:47.067 --rc genhtml_legend=1 00:04:47.067 --rc geninfo_all_blocks=1 00:04:47.067 --rc geninfo_unexecuted_blocks=1 00:04:47.067 00:04:47.067 ' 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.067 --rc genhtml_branch_coverage=1 00:04:47.067 --rc genhtml_function_coverage=1 00:04:47.067 --rc genhtml_legend=1 00:04:47.067 --rc geninfo_all_blocks=1 00:04:47.067 --rc geninfo_unexecuted_blocks=1 00:04:47.067 00:04:47.067 ' 00:04:47.067 10:21:47 version -- app/version.sh@17 -- # get_header_version major 00:04:47.067 10:21:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # cut -f2 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.067 10:21:47 version -- app/version.sh@17 -- # major=25 00:04:47.067 10:21:47 version -- app/version.sh@18 -- # get_header_version minor 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # cut -f2 00:04:47.067 10:21:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.067 10:21:47 version -- app/version.sh@18 -- # minor=1 00:04:47.067 10:21:47 version -- app/version.sh@19 -- # get_header_version patch 00:04:47.067 10:21:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # cut -f2 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.067 
10:21:47 version -- app/version.sh@19 -- # patch=0 00:04:47.067 10:21:47 version -- app/version.sh@20 -- # get_header_version suffix 00:04:47.067 10:21:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # cut -f2 00:04:47.067 10:21:47 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.067 10:21:47 version -- app/version.sh@20 -- # suffix=-pre 00:04:47.067 10:21:47 version -- app/version.sh@22 -- # version=25.1 00:04:47.067 10:21:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:47.067 10:21:47 version -- app/version.sh@28 -- # version=25.1rc0 00:04:47.067 10:21:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:47.067 10:21:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:47.067 10:21:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:47.067 10:21:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:47.067 00:04:47.067 real 0m0.244s 00:04:47.067 user 0m0.149s 00:04:47.067 sys 0m0.134s 00:04:47.067 10:21:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.067 10:21:47 version -- common/autotest_common.sh@10 -- # set +x 00:04:47.067 ************************************ 00:04:47.067 END TEST version 00:04:47.067 ************************************ 00:04:47.068 10:21:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:47.068 10:21:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:47.068 10:21:47 -- spdk/autotest.sh@194 -- # uname -s 00:04:47.068 10:21:47 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:47.068 10:21:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.068 10:21:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.068 10:21:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:47.068 10:21:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:47.068 10:21:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:47.068 10:21:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.068 10:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.327 10:21:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:47.327 10:21:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:47.327 10:21:47 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:47.327 10:21:47 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:47.327 10:21:47 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:47.327 10:21:47 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:47.327 10:21:47 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.327 10:21:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.327 10:21:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.327 10:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.327 ************************************ 00:04:47.327 START TEST nvmf_tcp 00:04:47.327 ************************************ 00:04:47.327 10:21:47 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.327 * Looking for test storage... 
00:04:47.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.327 10:21:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.327 10:21:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.327 10:21:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.327 10:21:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.327 10:21:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.327 --rc genhtml_branch_coverage=1 00:04:47.327 --rc genhtml_function_coverage=1 00:04:47.327 --rc genhtml_legend=1 00:04:47.327 --rc geninfo_all_blocks=1 00:04:47.327 --rc geninfo_unexecuted_blocks=1 00:04:47.327 00:04:47.327 ' 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.327 --rc genhtml_branch_coverage=1 00:04:47.327 --rc genhtml_function_coverage=1 00:04:47.327 --rc genhtml_legend=1 00:04:47.327 --rc geninfo_all_blocks=1 00:04:47.327 --rc geninfo_unexecuted_blocks=1 00:04:47.327 00:04:47.327 ' 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.327 --rc genhtml_branch_coverage=1 00:04:47.327 --rc genhtml_function_coverage=1 00:04:47.327 --rc genhtml_legend=1 00:04:47.327 --rc geninfo_all_blocks=1 00:04:47.327 --rc geninfo_unexecuted_blocks=1 00:04:47.327 00:04:47.327 ' 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.327 --rc genhtml_branch_coverage=1 00:04:47.327 --rc genhtml_function_coverage=1 00:04:47.327 --rc genhtml_legend=1 00:04:47.327 --rc geninfo_all_blocks=1 00:04:47.327 --rc geninfo_unexecuted_blocks=1 00:04:47.327 00:04:47.327 ' 00:04:47.327 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:47.327 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:47.327 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.327 10:21:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.327 ************************************ 00:04:47.327 START TEST nvmf_target_core 00:04:47.327 ************************************ 00:04:47.327 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.587 * Looking for test storage... 
00:04:47.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.587 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.588 --rc genhtml_branch_coverage=1 00:04:47.588 --rc genhtml_function_coverage=1 00:04:47.588 --rc genhtml_legend=1 00:04:47.588 --rc geninfo_all_blocks=1 00:04:47.588 --rc geninfo_unexecuted_blocks=1 00:04:47.588 00:04:47.588 ' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.588 --rc genhtml_branch_coverage=1 
00:04:47.588 --rc genhtml_function_coverage=1 00:04:47.588 --rc genhtml_legend=1 00:04:47.588 --rc geninfo_all_blocks=1 00:04:47.588 --rc geninfo_unexecuted_blocks=1 00:04:47.588 00:04:47.588 ' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.588 --rc genhtml_branch_coverage=1 00:04:47.588 --rc genhtml_function_coverage=1 00:04:47.588 --rc genhtml_legend=1 00:04:47.588 --rc geninfo_all_blocks=1 00:04:47.588 --rc geninfo_unexecuted_blocks=1 00:04:47.588 00:04:47.588 ' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.588 --rc genhtml_branch_coverage=1 00:04:47.588 --rc genhtml_function_coverage=1 00:04:47.588 --rc genhtml_legend=1 00:04:47.588 --rc geninfo_all_blocks=1 00:04:47.588 --rc geninfo_unexecuted_blocks=1 00:04:47.588 00:04:47.588 ' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:47.588 ************************************ 00:04:47.588 START TEST nvmf_abort 00:04:47.588 ************************************ 00:04:47.588 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.847 * Looking for test storage... 
00:04:47.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.847 
10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:47.847 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.848 --rc genhtml_branch_coverage=1 00:04:47.848 --rc genhtml_function_coverage=1 00:04:47.848 --rc genhtml_legend=1 00:04:47.848 --rc geninfo_all_blocks=1 00:04:47.848 --rc 
geninfo_unexecuted_blocks=1 00:04:47.848 00:04:47.848 ' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.848 --rc genhtml_branch_coverage=1 00:04:47.848 --rc genhtml_function_coverage=1 00:04:47.848 --rc genhtml_legend=1 00:04:47.848 --rc geninfo_all_blocks=1 00:04:47.848 --rc geninfo_unexecuted_blocks=1 00:04:47.848 00:04:47.848 ' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.848 --rc genhtml_branch_coverage=1 00:04:47.848 --rc genhtml_function_coverage=1 00:04:47.848 --rc genhtml_legend=1 00:04:47.848 --rc geninfo_all_blocks=1 00:04:47.848 --rc geninfo_unexecuted_blocks=1 00:04:47.848 00:04:47.848 ' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.848 --rc genhtml_branch_coverage=1 00:04:47.848 --rc genhtml_function_coverage=1 00:04:47.848 --rc genhtml_legend=1 00:04:47.848 --rc geninfo_all_blocks=1 00:04:47.848 --rc geninfo_unexecuted_blocks=1 00:04:47.848 00:04:47.848 ' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.848 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:47.848 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.523 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:54.523 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:54.524 10:21:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:54.524 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:54.524 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:54.524 10:21:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:54.524 Found net devices under 0000:86:00.0: cvl_0_0 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:54.524 Found net devices under 0000:86:00.1: cvl_0_1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:54.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:54.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:04:54.524 00:04:54.524 --- 10.0.0.2 ping statistics --- 00:04:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.524 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:54.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:54.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:04:54.524 00:04:54.524 --- 10.0.0.1 ping statistics --- 00:04:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.524 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:54.524 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3306807 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3306807 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3306807 ']' 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 [2024-11-20 10:21:54.601723] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:04:54.525 [2024-11-20 10:21:54.601768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:54.525 [2024-11-20 10:21:54.679823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.525 [2024-11-20 10:21:54.721045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:54.525 [2024-11-20 10:21:54.721085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:54.525 [2024-11-20 10:21:54.721092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.525 [2024-11-20 10:21:54.721098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.525 [2024-11-20 10:21:54.721103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:54.525 [2024-11-20 10:21:54.722565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.525 [2024-11-20 10:21:54.722672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.525 [2024-11-20 10:21:54.722673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 [2024-11-20 10:21:54.866909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 Malloc0 00:04:54.525 10:21:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 Delay0 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 [2024-11-20 10:21:54.937046] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.525 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:54.525 [2024-11-20 10:21:55.073681] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:57.054 Initializing NVMe Controllers 00:04:57.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:57.054 controller IO queue size 128 less than required 00:04:57.054 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:57.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:57.054 Initialization complete. Launching workers. 
00:04:57.054 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36538 00:04:57.054 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36603, failed to submit 62 00:04:57.054 success 36542, unsuccessful 61, failed 0 00:04:57.054 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:57.054 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.054 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.054 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:57.055 rmmod nvme_tcp 00:04:57.055 rmmod nvme_fabrics 00:04:57.055 rmmod nvme_keyring 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:57.055 10:21:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3306807 ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3306807 ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3306807' 00:04:57.055 killing process with pid 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3306807 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:57.055 10:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.960 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:58.960 00:04:58.960 real 0m11.373s 00:04:58.960 user 0m12.098s 00:04:58.960 sys 0m5.548s 00:04:58.960 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.960 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.960 ************************************ 00:04:58.960 END TEST nvmf_abort 00:04:58.960 ************************************ 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:59.220 ************************************ 00:04:59.220 START TEST nvmf_ns_hotplug_stress 00:04:59.220 ************************************ 00:04:59.220 10:21:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:59.220 * Looking for test storage... 00:04:59.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.220 
10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.220 10:21:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 
00:04:59.220 ' 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.220 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.221 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:59.480 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:06.053 10:22:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:06.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:06.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:06.053 10:22:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:06.053 Found net devices under 0000:86:00.0: cvl_0_0 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:06.053 10:22:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:06.053 Found net devices under 0000:86:00.1: cvl_0_1 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:06.053 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:06.054 10:22:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:06.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:06.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:05:06.054 00:05:06.054 --- 10.0.0.2 ping statistics --- 00:05:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.054 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:06.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:06.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:05:06.054 00:05:06.054 --- 10.0.0.1 ping statistics --- 00:05:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.054 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3311012 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:06.054 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3311012 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3311012 ']' 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.054 [2024-11-20 10:22:06.052504] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:05:06.054 [2024-11-20 10:22:06.052562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:06.054 [2024-11-20 10:22:06.130800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.054 [2024-11-20 10:22:06.173165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:06.054 [2024-11-20 10:22:06.173200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:06.054 [2024-11-20 10:22:06.173207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.054 [2024-11-20 10:22:06.173214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.054 [2024-11-20 10:22:06.173219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:06.054 [2024-11-20 10:22:06.174682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.054 [2024-11-20 10:22:06.174791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.054 [2024-11-20 10:22:06.174791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:06.054 [2024-11-20 10:22:06.483806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:06.054 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:06.312 [2024-11-20 10:22:06.877266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:06.312 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:06.570 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:06.828 Malloc0 00:05:06.828 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:06.828 Delay0 00:05:06.828 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.086 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:07.344 NULL1 00:05:07.344 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:07.602 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3311284 00:05:07.602 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:07.602 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:07.602 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.602 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.860 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:07.860 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:08.118 true 00:05:08.118 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:08.118 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.376 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.641 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:08.641 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:08.641 true 00:05:08.641 10:22:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:08.641 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.017 Read completed with error (sct=0, sc=11) 00:05:10.017 10:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.017 10:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:10.017 10:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:10.275 true 00:05:10.275 10:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:10.275 10:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.209 10:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.467 10:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:11.467 10:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:11.467 true 00:05:11.467 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:11.467 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.725 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.983 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:11.983 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:12.241 true 00:05:12.241 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:12.241 10:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.614 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.614 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:13.614 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:13.614 true 00:05:13.614 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:13.614 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.872 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.129 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:14.129 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:14.387 true 00:05:14.387 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:14.387 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.578 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:15.578 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:15.836 true 00:05:15.836 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:15.836 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.769 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.769 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:16.769 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:17.027 true 00:05:17.027 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:17.027 
10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.285 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.543 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:17.543 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:17.543 true 00:05:17.800 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:17.800 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.733 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.990 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:18.990 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:19.248 true 00:05:19.248 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 3311284 00:05:19.248 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.248 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.505 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:19.505 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:19.763 true 00:05:19.763 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:19.763 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.952 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:20.952 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:21.207 true 00:05:21.207 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:21.207 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.136 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.392 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:22.392 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:22.392 true 00:05:22.392 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:22.392 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.649 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.906 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:22.906 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:23.164 true 00:05:23.164 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:23.164 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.159 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.417 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:24.417 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:24.417 true 00:05:24.417 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:24.417 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.674 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.932 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:24.932 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 
00:05:25.190 true 00:05:25.190 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:25.190 10:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.124 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.381 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:26.381 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:26.381 true 00:05:26.381 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:26.381 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.639 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.904 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:26.904 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:27.163 true 00:05:27.163 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3311284 00:05:27.163 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.097 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.355 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:28.355 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:28.612 true 00:05:28.612 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:28.612 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.544 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:05:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.544 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:29.544 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:29.802 true 00:05:29.802 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:29.802 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.059 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.317 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:30.317 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:30.317 true 00:05:30.575 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:30.575 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.507 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.507 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:31.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.765 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:31.765 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:32.023 true 00:05:32.023 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:32.023 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.281 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.281 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:32.281 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:32.539 true 00:05:32.539 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:32.539 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.912 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:33.912 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:34.170 true 00:05:34.170 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:34.170 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.103 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.103 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:35.103 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:35.394 true 00:05:35.394 10:22:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:35.394 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.653 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.653 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:35.653 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:35.911 true 00:05:35.911 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284 00:05:35.911 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.284 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:37.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.285 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:37.285 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:37.542 true
00:05:37.543 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284
00:05:37.543 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.476 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.476 Initializing NVMe Controllers
00:05:38.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:38.476 Controller IO queue size 128, less than required.
00:05:38.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:38.476 Controller IO queue size 128, less than required.
00:05:38.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:38.476 Initialization complete. Launching workers.
00:05:38.476 ========================================================
00:05:38.476 Latency(us)
00:05:38.476 Device Information : IOPS MiB/s Average min max
00:05:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1675.47 0.82 49316.16 2619.37 1014180.96
00:05:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16316.00 7.97 7821.89 2310.95 381042.45
00:05:38.476 ========================================================
00:05:38.476 Total : 17991.47 8.78 11686.07 2310.95 1014180.96
00:05:38.476
00:05:38.476 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:38.476 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:38.734 true
00:05:38.734 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3311284
00:05:38.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3311284) - No such process
00:05:38.734 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3311284
00:05:38.734 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
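The "Total" row in the latency summary above aggregates the two per-namespace rows: IOPS and MiB/s add, the Average is IOPS-weighted, and min/max span both rows. A reader-side sanity check of the printed numbers, with the per-row values copied into an awk snippet (purely illustrative, not part of the harness):

```shell
# Recompute the "Total" row of the latency summary from the two NSID rows.
# Values are copied from the table in the log above.
total=$(awk 'BEGIN {
    n1_iops = 1675.47;  n1_mib = 0.82; n1_avg = 49316.16   # NSID 1
    n2_iops = 16316.00; n2_mib = 7.97; n2_avg = 7821.89    # NSID 2
    iops = n1_iops + n2_iops                               # throughputs add
    mib  = n1_mib + n2_mib
    avg  = (n1_iops * n1_avg + n2_iops * n2_avg) / iops    # IOPS-weighted mean
    printf "%.2f %.2f %.2f", iops, mib, avg
}')
echo "Total : $total"
```

This prints `Total : 17991.47 8.79 11686.08`; the 0.01 drift against the table's 8.78 and 11686.07 comes from the per-row values themselves being rounded to two decimals before they were printed.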
00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.992 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:39.250 null0 00:05:39.250 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.250 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.250 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:39.507 null1 00:05:39.508 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.508 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.508 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:39.766 null2 00:05:39.766 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.766 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.766 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:39.766 null3 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:40.024 null4 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.024 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:40.281 null5 00:05:40.281 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.281 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.281 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:40.539 null6 00:05:40.539 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.539 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.539 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:40.798 null7 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.798 10:22:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:40.798 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
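The interleaved trace above is the parallel phase of the test: eight null bdevs are created (`@59`-`@60`), then one `add_remove` worker per namespace is launched in the background (`@62`-`@64`), each attaching and detaching its namespace ten times (`@14`-`@18`), and the parent waits on all pids (`@66`). A runnable sketch reconstructed from those trace lines, again with `scripts/rpc.py` stubbed out and the bdev sizes taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the parallel add/remove phase seen in the trace
# (ns_hotplug_stress.sh lines @58-@66). rpc() stubs scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

# @59-@60: one 100 MiB null bdev with 4096-byte blocks per worker
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done

# @14-@18: each worker attaches and detaches its namespace 10 times
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# @62-@64: launch the workers in the background and remember their pids
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"   # @66: block until every worker has finished
```

Because the workers run concurrently, their xtrace output interleaves exactly as it does in the log above, which is why add and remove records for different namespaces appear shuffled together.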
00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3317106 3317107 3317109 3317111 3317113 3317115 3317117 3317119 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.799 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.316 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.574 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.832 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.832 10:22:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.833 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.090 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.349 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.349 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.349 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.608 10:22:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.608 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.866 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.124 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.383 10:22:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.383 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.383 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 
10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.641 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.901 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.160 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.419 
10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.419 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.419 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.420 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.677 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:44.935 rmmod nvme_tcp 00:05:44.935 rmmod nvme_fabrics 00:05:44.935 rmmod nvme_keyring 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3311012 ']' 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3311012 00:05:44.935 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 3311012 ']' 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3311012 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3311012 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3311012' 00:05:45.194 killing process with pid 3311012 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3311012 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3311012 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.194 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.731 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:47.731 00:05:47.731 real 0m48.222s 00:05:47.731 user 3m15.981s 00:05:47.731 sys 0m15.662s 00:05:47.731 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.731 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:47.731 ************************************ 00:05:47.731 END TEST nvmf_ns_hotplug_stress 00:05:47.732 ************************************ 00:05:47.732 10:22:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:47.732 10:22:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.732 10:22:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.732 10:22:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.732 ************************************ 00:05:47.732 START TEST nvmf_delete_subsystem 00:05:47.732 ************************************ 00:05:47.732 
10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:47.732 * Looking for test storage... 00:05:47.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.732 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.732 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.732 --rc genhtml_branch_coverage=1 00:05:47.732 --rc genhtml_function_coverage=1 00:05:47.732 --rc genhtml_legend=1 00:05:47.732 --rc geninfo_all_blocks=1 00:05:47.732 --rc geninfo_unexecuted_blocks=1 00:05:47.732 00:05:47.732 ' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.732 --rc genhtml_branch_coverage=1 00:05:47.732 --rc genhtml_function_coverage=1 00:05:47.732 --rc genhtml_legend=1 00:05:47.732 --rc geninfo_all_blocks=1 00:05:47.732 --rc geninfo_unexecuted_blocks=1 00:05:47.732 00:05:47.732 ' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.732 --rc genhtml_branch_coverage=1 00:05:47.732 --rc genhtml_function_coverage=1 00:05:47.732 --rc genhtml_legend=1 00:05:47.732 --rc geninfo_all_blocks=1 00:05:47.732 --rc geninfo_unexecuted_blocks=1 00:05:47.732 00:05:47.732 ' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.732 --rc genhtml_branch_coverage=1 00:05:47.732 --rc genhtml_function_coverage=1 00:05:47.732 --rc genhtml_legend=1 00:05:47.732 --rc geninfo_all_blocks=1 00:05:47.732 --rc geninfo_unexecuted_blocks=1 00:05:47.732 00:05:47.732 ' 
00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.732 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:47.732 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.733 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:54.304 10:22:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:54.304 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:54.304 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:54.304 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:54.304 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:54.304 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:54.304 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:54.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:54.305 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:54.305 Found net devices under 0000:86:00.0: cvl_0_0 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:05:54.305 Found net devices under 0000:86:00.1: cvl_0_1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:54.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:54.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:05:54.305 00:05:54.305 --- 10.0.0.2 ping statistics --- 00:05:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.305 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:54.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:54.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:05:54.305 00:05:54.305 --- 10.0.0.1 ping statistics --- 00:05:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.305 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:54.305 10:22:54 
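The namespace plumbing that nvmf/common.sh runs above (flush both ports, move cvl_0_0 into cvl_0_0_ns_spdk, assign 10.0.0.1/10.0.0.2, bring links up, open TCP 4420, ping both ways) can be read as one short script. This is a dry-run sketch: interface names and addresses come from the log, but `run` only records and prints each command so it can be inspected without root; swap its body for `"$@"` to actually execute.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf/common.sh above.
# run() only records and prints each command (no root needed to read it).
set -euo pipefail

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk       # private namespace the target runs in

cmds=()
run() { cmds+=("$*"); printf '%s\n' "$*"; }   # replace body with "$@" to execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port and verify reachability, as the log does with ping.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```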
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3321517 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3321517 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:54.305 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3321517 ']' 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 [2024-11-20 10:22:54.335872] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
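The nvmfappstart/waitforlisten step above launches nvmf_tgt inside the namespace and then polls /var/tmp/spdk.sock until it appears (autotest_common.sh uses max_retries=100). A minimal stand-alone sketch of that poll loop, with a background sleep and a temp marker file standing in for the real target and its RPC socket:

```shell
#!/usr/bin/env bash
set -euo pipefail

sock=$(mktemp -u)   # stand-in for /var/tmp/spdk.sock
# Stand-in for "ip netns exec ... nvmf_tgt": a background job that creates a
# marker file after a short startup delay, like the target's RPC socket.
( sleep 0.2; : > "$sock" ) &
pid=$!

# waitforlisten-style poll: retry until the path appears or retries run out.
max_retries=100
while [ ! -e "$sock" ]; do
  max_retries=$((max_retries - 1))
  [ "$max_retries" -gt 0 ] || { echo "timed out waiting for $sock" >&2; exit 1; }
  sleep 0.1
done
echo "pid $pid is listening on $sock"
wait "$pid"
rm -f "$sock"
```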
00:05:54.306 [2024-11-20 10:22:54.335922] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.306 [2024-11-20 10:22:54.414714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.306 [2024-11-20 10:22:54.456722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.306 [2024-11-20 10:22:54.456757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.306 [2024-11-20 10:22:54.456765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.306 [2024-11-20 10:22:54.456770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.306 [2024-11-20 10:22:54.456776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:54.306 [2024-11-20 10:22:54.457962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.306 [2024-11-20 10:22:54.457964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 [2024-11-20 10:22:54.598423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 [2024-11-20 10:22:54.618636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 NULL1 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 Delay0 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3321540 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:54.306 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:54.306 [2024-11-20 10:22:54.730686] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
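The rpc_cmd calls from delete_subsystem.sh visible above (create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, then the namespace attach) can be collected into one sequence. Here `rpc` is a recording stub for illustration; the real driver is scripts/rpc.py against the running target. The 1000000 µs delay-bdev latencies are what keep I/O in flight long enough for the later delete to abort it.

```shell
#!/usr/bin/env bash
# RPC sequence driven by delete_subsystem.sh, reconstructed from the log.
# rpc() only records/prints each call; point it at scripts/rpc.py to execute.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
calls=()
rpc() { calls+=("$*"); printf 'rpc.py %s\n' "$*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512               # 1000 MiB, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
rpc nvmf_subsystem_add_ns "$NQN" Delay0
```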
00:05:56.204 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:56.204 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.204 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error 
(sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 [2024-11-20 10:22:56.902323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22254a0 is same with the state(6) to be set 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed 
with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 starting I/O failed: -6 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Read completed with error (sct=0, sc=8) 00:05:56.204 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with 
error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read 
completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 
00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error 
(sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 Write completed with error (sct=0, sc=8) 00:05:56.205 starting I/O failed: -6 00:05:56.205 Read completed with error (sct=0, sc=8) 00:05:56.205 [2024-11-20 10:22:56.903468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1eb4000c40 is same with the state(6) to be set 00:05:57.579 [2024-11-20 10:22:57.868598] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22269a0 is same with the state(6) to be set 00:05:57.579 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 [2024-11-20 10:22:57.903593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2225680 is same with the state(6) to be set 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 
00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 [2024-11-20 10:22:57.903907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f1eb400d800 is same with the state(6) to be set 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read 
completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 [2024-11-20 10:22:57.904094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1eb400d020 is same with the state(6) to be set 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Write completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 Read completed with error (sct=0, sc=8) 00:05:57.580 [2024-11-20 10:22:57.905253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22252c0 is same 
with the state(6) to be set
00:05:57.580 Initializing NVMe Controllers
00:05:57.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:57.580 Controller IO queue size 128, less than required.
00:05:57.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:57.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:57.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:57.581 Initialization complete. Launching workers.
00:05:57.581 ========================================================
00:05:57.581                                             Latency(us)
00:05:57.581 Device Information                     :       IOPS      MiB/s    Average        min        max
00:05:57.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     171.18       0.08  892102.62     446.76 1011708.70
00:05:57.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     190.04       0.09  894765.53     406.99 1012147.14
00:05:57.581 ========================================================
00:05:57.581 Total                                  :     361.22       0.18  893503.58     406.99 1012147.14
00:05:57.581
00:05:57.581 [2024-11-20 10:22:57.905588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22269a0 (9): Bad file descriptor
00:05:57.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:57.581 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:57.581 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:05:57.581 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3321540
00:05:57.581 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3321540 00:05:57.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3321540) - No such process 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3321540 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3321540 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3321540 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -m 10 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.840 [2024-11-20 10:22:58.438566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3322231 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231 00:05:57.840 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.840 [2024-11-20 10:22:58.524008] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:58.406 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.406 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231 00:05:58.406 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.973 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.973 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231 00:05:58.973 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.539 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.539 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231 00:05:59.539 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.797 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.797 10:23:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231
00:05:59.797 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:00.419 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:00.419 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231
00:06:00.419 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:01.023 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:01.023 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231
00:06:01.023 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:01.023 Initializing NVMe Controllers
00:06:01.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:01.023 Controller IO queue size 128, less than required.
00:06:01.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:01.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:01.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:01.023 Initialization complete. Launching workers.
00:06:01.023 ========================================================
00:06:01.023                                             Latency(us)
00:06:01.023 Device Information                     :       IOPS      MiB/s    Average        min        max
00:06:01.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002533.80 1000145.83 1008583.25
00:06:01.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003513.93 1000168.08 1011475.68
00:06:01.023 ========================================================
00:06:01.023 Total                                  :     256.00       0.12 1003023.87 1000145.83 1011475.68
00:06:01.023
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3322231
00:06:01.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3322231) - No such process
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3322231
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:01.282 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:01.282 rmmod nvme_tcp 00:06:01.282 rmmod nvme_fabrics 00:06:01.541 rmmod nvme_keyring 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3321517 ']' 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3321517 ']' 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3321517' 00:06:01.541 killing process with pid 3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3321517 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.541 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.801 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:03.710 00:06:03.710 real 0m16.300s 00:06:03.710 user 0m29.260s 00:06:03.710 sys 0m5.606s 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.710 ************************************ 00:06:03.710 END TEST 
nvmf_delete_subsystem 00:06:03.710 ************************************ 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:03.710 ************************************ 00:06:03.710 START TEST nvmf_host_management 00:06:03.710 ************************************ 00:06:03.710 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.970 * Looking for test storage... 00:06:03.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.970 10:23:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:03.970 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.971 --rc genhtml_branch_coverage=1 00:06:03.971 --rc genhtml_function_coverage=1 00:06:03.971 --rc genhtml_legend=1 00:06:03.971 --rc 
geninfo_all_blocks=1 00:06:03.971 --rc geninfo_unexecuted_blocks=1 00:06:03.971 00:06:03.971 ' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.971 --rc genhtml_branch_coverage=1 00:06:03.971 --rc genhtml_function_coverage=1 00:06:03.971 --rc genhtml_legend=1 00:06:03.971 --rc geninfo_all_blocks=1 00:06:03.971 --rc geninfo_unexecuted_blocks=1 00:06:03.971 00:06:03.971 ' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.971 --rc genhtml_branch_coverage=1 00:06:03.971 --rc genhtml_function_coverage=1 00:06:03.971 --rc genhtml_legend=1 00:06:03.971 --rc geninfo_all_blocks=1 00:06:03.971 --rc geninfo_unexecuted_blocks=1 00:06:03.971 00:06:03.971 ' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.971 --rc genhtml_branch_coverage=1 00:06:03.971 --rc genhtml_function_coverage=1 00:06:03.971 --rc genhtml_legend=1 00:06:03.971 --rc geninfo_all_blocks=1 00:06:03.971 --rc geninfo_unexecuted_blocks=1 00:06:03.971 00:06:03.971 ' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.971 
10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:03.971 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.546 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:10.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:10.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.547 10:23:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:10.547 Found net devices under 0000:86:00.0: cvl_0_0 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:10.547 Found net devices under 0000:86:00.1: cvl_0_1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:10.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:06:10.547 00:06:10.547 --- 10.0.0.2 ping statistics --- 00:06:10.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.547 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:10.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:06:10.547 00:06:10.547 --- 10.0.0.1 ping statistics --- 00:06:10.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.547 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.547 10:23:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3326466 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3326466 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3326466 ']' 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.547 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 [2024-11-20 10:23:10.689118] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:06:10.548 [2024-11-20 10:23:10.689168] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.548 [2024-11-20 10:23:10.772060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.548 [2024-11-20 10:23:10.814736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.548 [2024-11-20 10:23:10.814778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.548 [2024-11-20 10:23:10.814785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.548 [2024-11-20 10:23:10.814792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.548 [2024-11-20 10:23:10.814797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:10.548 [2024-11-20 10:23:10.816408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.548 [2024-11-20 10:23:10.816516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.548 [2024-11-20 10:23:10.816623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.548 [2024-11-20 10:23:10.816623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 [2024-11-20 10:23:10.961707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:10.548 10:23:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.548 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 Malloc0 00:06:10.548 [2024-11-20 10:23:11.045531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3326510 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3326510 /var/tmp/bdevperf.sock 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3326510 ']' 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:10.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:10.548 { 00:06:10.548 "params": { 00:06:10.548 "name": "Nvme$subsystem", 00:06:10.548 "trtype": "$TEST_TRANSPORT", 00:06:10.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:10.548 "adrfam": "ipv4", 00:06:10.548 "trsvcid": "$NVMF_PORT", 00:06:10.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:10.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:10.548 "hdgst": ${hdgst:-false}, 
00:06:10.548 "ddgst": ${ddgst:-false} 00:06:10.548 }, 00:06:10.548 "method": "bdev_nvme_attach_controller" 00:06:10.548 } 00:06:10.548 EOF 00:06:10.548 )") 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:10.548 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:10.548 "params": { 00:06:10.548 "name": "Nvme0", 00:06:10.548 "trtype": "tcp", 00:06:10.548 "traddr": "10.0.0.2", 00:06:10.548 "adrfam": "ipv4", 00:06:10.548 "trsvcid": "4420", 00:06:10.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:10.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:10.548 "hdgst": false, 00:06:10.548 "ddgst": false 00:06:10.548 }, 00:06:10.548 "method": "bdev_nvme_attach_controller" 00:06:10.548 }' 00:06:10.548 [2024-11-20 10:23:11.139145] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:06:10.548 [2024-11-20 10:23:11.139189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326510 ] 00:06:10.548 [2024-11-20 10:23:11.216585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.548 [2024-11-20 10:23:11.257895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.114 Running I/O for 10 seconds... 
00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.114 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:11.115 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=672 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 672 -ge 100 ']' 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.374 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.374 [2024-11-20 10:23:11.988801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dd200 is same with the state(6) to be set 00:06:11.374 [2024-11-20 10:23:11.989015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.374 [2024-11-20 10:23:11.989048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.374 [2024-11-20 10:23:11.989065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.374 [2024-11-20 10:23:11.989073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.374 [2024-11-20 10:23:11.989083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.374 [2024-11-20 10:23:11.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.375 [2024-11-20 10:23:11.989] [repeated nvme_qpair.c nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs omitted: WRITE sqid:1 cid:6-61 nsid:1 lba:99072-106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with ABORTED - SQ DELETION (00/08)]
00:06:11.376 [2024-11-20 10:23:11.990000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.376 [2024-11-20 10:23:11.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.376 [2024-11-20 10:23:11.990016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.376 [2024-11-20 10:23:11.990022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.376 [2024-11-20 10:23:11.990031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.376 [2024-11-20 10:23:11.990038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.376 [2024-11-20 10:23:11.990047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.376 [2024-11-20 10:23:11.990054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.376 [2024-11-20 10:23:11.990062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.376 [2024-11-20 10:23:11.990069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.376 [2024-11-20 10:23:11.991027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:11.376 task offset: 98688 on job bdev=Nvme0n1 fails 00:06:11.376 00:06:11.376 
Latency(us) 00:06:11.376 [2024-11-20T09:23:12.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.376 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:11.376 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:11.376 Verification LBA range: start 0x0 length 0x400 00:06:11.376 Nvme0n1 : 0.41 1895.36 118.46 157.95 0.00 30323.54 1453.19 27696.08 00:06:11.376 [2024-11-20T09:23:12.107Z] =================================================================================================================== 00:06:11.376 [2024-11-20T09:23:12.107Z] Total : 1895.36 118.46 157.95 0.00 30323.54 1453.19 27696.08 00:06:11.377 [2024-11-20 10:23:11.993435] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.377 [2024-11-20 10:23:11.993457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325500 (9): Bad file descriptor 00:06:11.377 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.377 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.377 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.377 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.377 [2024-11-20 10:23:12.000639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
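The waitforio countdown traced earlier (host_management.sh@52-62) polls bdev read statistics over the RPC socket until enough I/O has completed, with a bounded retry count. A minimal sketch of that polling pattern; the `get_read_ops` hook is an assumption of this sketch so the loop can be exercised without a live bdevperf socket (in the real script it is `rpc_cmd -s $sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`), and the threshold of 100 and 10 retries mirror the trace:

```shell
# Poll a read-ops source until it reports at least $2 operations,
# retrying up to $3 times with a 0.25s sleep between attempts.
# $1 names a command that prints the current num_read_ops count.
waitforio() {
	local get_read_ops=$1 threshold=${2:-100} i=${3:-10} ret=1 count
	while ((i != 0)); do
		count=$("$get_read_ops")
		if [ "$count" -ge "$threshold" ]; then
			# Enough I/O observed (e.g. 672 -ge 100 in this log): success.
			ret=0
			break
		fi
		sleep 0.25
		((i--))
	done
	return $ret
}
```

In this run the first poll saw 67 read ops (below the threshold), and the second, after the 0.25s sleep, saw 672 and broke out with `ret=0`.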
00:06:11.377 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.377 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3326510 00:06:12.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3326510) - No such process 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:12.311 { 00:06:12.311 "params": { 00:06:12.311 "name": "Nvme$subsystem", 00:06:12.311 "trtype": "$TEST_TRANSPORT", 00:06:12.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:12.311 "adrfam": "ipv4", 00:06:12.311 "trsvcid": "$NVMF_PORT", 00:06:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:12.311 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:12.311 "hdgst": ${hdgst:-false}, 00:06:12.311 "ddgst": ${ddgst:-false} 00:06:12.311 }, 00:06:12.311 "method": "bdev_nvme_attach_controller" 00:06:12.311 } 00:06:12.311 EOF 00:06:12.311 )") 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:12.311 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:12.311 "params": { 00:06:12.311 "name": "Nvme0", 00:06:12.311 "trtype": "tcp", 00:06:12.311 "traddr": "10.0.0.2", 00:06:12.311 "adrfam": "ipv4", 00:06:12.311 "trsvcid": "4420", 00:06:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:12.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:12.311 "hdgst": false, 00:06:12.311 "ddgst": false 00:06:12.311 }, 00:06:12.311 "method": "bdev_nvme_attach_controller" 00:06:12.311 }' 00:06:12.569 [2024-11-20 10:23:13.055601] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:06:12.569 [2024-11-20 10:23:13.055651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326893 ] 00:06:12.569 [2024-11-20 10:23:13.131599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.569 [2024-11-20 10:23:13.170717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.826 Running I/O for 1 seconds... 
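Both bdevperf invocations in this log read their controller config from a `/dev/fd/6x` path, i.e. the generated JSON is piped in through process substitution rather than written to a temp file. A minimal illustration of that technique, with `cat` standing in for bdevperf as an assumption of this sketch:

```shell
# Feed generated JSON to a consumer through /dev/fd via process substitution,
# avoiding a temporary config file (the pattern behind "--json /dev/fd/62").
gen_config() {
	printf '%s\n' '{"params": {"name": "Nvme0"}, "method": "bdev_nvme_attach_controller"}'
}
# Bash expands <(gen_config) to a readable path such as /dev/fd/63;
# here `cat` plays the role of bdevperf reading its --json argument.
read_back=$(cat <(gen_config))
```

Because the config exists only as a pipe, nothing is left on disk to clean up after the run, and the same generator can feed differently parameterized invocations back to back.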
00:06:13.760 1982.00 IOPS, 123.88 MiB/s
00:06:13.760 Latency(us)
00:06:13.760 [2024-11-20T09:23:14.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:13.760 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:13.760 Verification LBA range: start 0x0 length 0x400
00:06:13.760 Nvme0n1 : 1.02 2012.31 125.77 0.00 0.00 31301.45 4929.45 27468.13
00:06:13.760 [2024-11-20T09:23:14.491Z] ===================================================================================================================
00:06:13.760 [2024-11-20T09:23:14.491Z] Total : 2012.31 125.77 0.00 0.00 31301.45 4929.45 27468.13
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:14.019 10:23:14
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:14.019 rmmod nvme_tcp 00:06:14.019 rmmod nvme_fabrics 00:06:14.019 rmmod nvme_keyring 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3326466 ']' 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3326466 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3326466 ']' 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3326466 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.019 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3326466 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3326466' 00:06:14.278 killing process with pid 3326466 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3326466 00:06:14.278 10:23:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3326466 00:06:14.278 [2024-11-20 10:23:14.920105] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:14.278 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.279 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.279 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:16.816 00:06:16.816 real 0m12.607s 00:06:16.816 user 0m20.549s 
00:06:16.816 sys 0m5.625s 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.816 ************************************ 00:06:16.816 END TEST nvmf_host_management 00:06:16.816 ************************************ 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.816 ************************************ 00:06:16.816 START TEST nvmf_lvol 00:06:16.816 ************************************ 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:16.816 * Looking for test storage... 
00:06:16.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.816 10:23:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:16.816 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.817 --rc genhtml_branch_coverage=1 00:06:16.817 --rc genhtml_function_coverage=1 00:06:16.817 --rc genhtml_legend=1 00:06:16.817 --rc geninfo_all_blocks=1 00:06:16.817 --rc geninfo_unexecuted_blocks=1 
00:06:16.817 00:06:16.817 ' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.817 --rc genhtml_branch_coverage=1 00:06:16.817 --rc genhtml_function_coverage=1 00:06:16.817 --rc genhtml_legend=1 00:06:16.817 --rc geninfo_all_blocks=1 00:06:16.817 --rc geninfo_unexecuted_blocks=1 00:06:16.817 00:06:16.817 ' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.817 --rc genhtml_branch_coverage=1 00:06:16.817 --rc genhtml_function_coverage=1 00:06:16.817 --rc genhtml_legend=1 00:06:16.817 --rc geninfo_all_blocks=1 00:06:16.817 --rc geninfo_unexecuted_blocks=1 00:06:16.817 00:06:16.817 ' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.817 --rc genhtml_branch_coverage=1 00:06:16.817 --rc genhtml_function_coverage=1 00:06:16.817 --rc genhtml_legend=1 00:06:16.817 --rc geninfo_all_blocks=1 00:06:16.817 --rc geninfo_unexecuted_blocks=1 00:06:16.817 00:06:16.817 ' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.817 10:23:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.817 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:23.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.390 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:23.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.391 
10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:23.391 Found net devices under 0000:86:00.0: cvl_0_0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.391 10:23:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:23.391 Found net devices under 0000:86:00.1: cvl_0_1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:06:23.391 00:06:23.391 --- 10.0.0.2 ping statistics --- 00:06:23.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.391 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:06:23.391 00:06:23.391 --- 10.0.0.1 ping statistics --- 00:06:23.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.391 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3330757 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3330757 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3330757 ']' 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.391 [2024-11-20 10:23:23.388888] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:06:23.391 [2024-11-20 10:23:23.388962] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.391 [2024-11-20 10:23:23.469372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.391 [2024-11-20 10:23:23.511787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.391 [2024-11-20 10:23:23.511822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.391 [2024-11-20 10:23:23.511829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.391 [2024-11-20 10:23:23.511835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.391 [2024-11-20 10:23:23.511840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:23.391 [2024-11-20 10:23:23.513236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.391 [2024-11-20 10:23:23.513341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.391 [2024-11-20 10:23:23.513342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.391 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.392 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:23.392 [2024-11-20 10:23:23.814435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.392 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:23.392 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:23.392 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:23.650 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:23.650 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:23.909 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:24.168 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b4cba600-77b9-476c-9e17-01442edb1c97 00:06:24.169 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4cba600-77b9-476c-9e17-01442edb1c97 lvol 20 00:06:24.428 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0ff6c3b2-4bb0-408e-bb7d-5fdf5699de6a 00:06:24.428 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:24.428 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ff6c3b2-4bb0-408e-bb7d-5fdf5699de6a 00:06:24.687 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:24.946 [2024-11-20 10:23:25.491989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.946 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:25.204 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3331244 00:06:25.204 10:23:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:25.204 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:26.137 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0ff6c3b2-4bb0-408e-bb7d-5fdf5699de6a MY_SNAPSHOT 00:06:26.394 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5e56815d-731a-4033-92e0-6081be38c78e 00:06:26.394 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0ff6c3b2-4bb0-408e-bb7d-5fdf5699de6a 30 00:06:26.651 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5e56815d-731a-4033-92e0-6081be38c78e MY_CLONE 00:06:26.907 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=946b0062-6274-40da-8a41-ad691335a90f 00:06:26.907 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 946b0062-6274-40da-8a41-ad691335a90f 00:06:27.468 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3331244 00:06:35.612 Initializing NVMe Controllers 00:06:35.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:35.612 Controller IO queue size 128, less than required. 00:06:35.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:35.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:35.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:35.612 Initialization complete. Launching workers. 00:06:35.612 ======================================================== 00:06:35.612 Latency(us) 00:06:35.612 Device Information : IOPS MiB/s Average min max 00:06:35.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11797.86 46.09 10848.39 1165.66 110770.71 00:06:35.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11718.56 45.78 10924.81 3559.25 45451.78 00:06:35.612 ======================================================== 00:06:35.612 Total : 23516.42 91.86 10886.47 1165.66 110770.71 00:06:35.612 00:06:35.612 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:35.871 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ff6c3b2-4bb0-408e-bb7d-5fdf5699de6a 00:06:35.871 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b4cba600-77b9-476c-9e17-01442edb1c97 00:06:36.130 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:36.130 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:36.131 10:23:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:36.131 rmmod nvme_tcp 00:06:36.131 rmmod nvme_fabrics 00:06:36.131 rmmod nvme_keyring 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3330757 ']' 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3330757 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3330757 ']' 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3330757 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.131 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3330757 00:06:36.390 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.390 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.390 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3330757' 00:06:36.390 killing process with pid 3330757 00:06:36.390 
10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3330757 00:06:36.390 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3330757 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.390 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.927 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.927 00:06:38.927 real 0m22.073s 00:06:38.927 user 1m3.338s 00:06:38.927 sys 0m7.774s 00:06:38.927 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.927 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.927 ************************************ 00:06:38.927 
END TEST nvmf_lvol 00:06:38.927 ************************************ 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.928 ************************************ 00:06:38.928 START TEST nvmf_lvs_grow 00:06:38.928 ************************************ 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:38.928 * Looking for test storage... 00:06:38.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.928 10:23:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.928 --rc genhtml_branch_coverage=1 00:06:38.928 --rc genhtml_function_coverage=1 00:06:38.928 --rc genhtml_legend=1 00:06:38.928 --rc geninfo_all_blocks=1 00:06:38.928 --rc geninfo_unexecuted_blocks=1 00:06:38.928 00:06:38.928 ' 
00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.928 --rc genhtml_branch_coverage=1 00:06:38.928 --rc genhtml_function_coverage=1 00:06:38.928 --rc genhtml_legend=1 00:06:38.928 --rc geninfo_all_blocks=1 00:06:38.928 --rc geninfo_unexecuted_blocks=1 00:06:38.928 00:06:38.928 ' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.928 --rc genhtml_branch_coverage=1 00:06:38.928 --rc genhtml_function_coverage=1 00:06:38.928 --rc genhtml_legend=1 00:06:38.928 --rc geninfo_all_blocks=1 00:06:38.928 --rc geninfo_unexecuted_blocks=1 00:06:38.928 00:06:38.928 ' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.928 --rc genhtml_branch_coverage=1 00:06:38.928 --rc genhtml_function_coverage=1 00:06:38.928 --rc genhtml_legend=1 00:06:38.928 --rc geninfo_all_blocks=1 00:06:38.928 --rc geninfo_unexecuted_blocks=1 00:06:38.928 00:06:38.928 ' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.928 10:23:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.928 
10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.928 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.929 10:23:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.929 
10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.929 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.500 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.501 
10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.501 Found net devices under 0000:86:00.0: cvl_0_0 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.501 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.501 10:23:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:06:45.501 00:06:45.501 --- 10.0.0.2 ping statistics --- 00:06:45.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.501 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:06:45.501 00:06:45.501 --- 10.0.0.1 ping statistics --- 00:06:45.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.501 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3336641 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3336641 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3336641 ']' 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.501 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 [2024-11-20 10:23:45.548882] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:06:45.501 [2024-11-20 10:23:45.548927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.501 [2024-11-20 10:23:45.628446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.501 [2024-11-20 10:23:45.669747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.501 [2024-11-20 10:23:45.669785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.501 [2024-11-20 10:23:45.669792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.501 [2024-11-20 10:23:45.669798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.501 [2024-11-20 10:23:45.669803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:45.502 [2024-11-20 10:23:45.670387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.502 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.502 [2024-11-20 10:23:45.978667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.502 ************************************ 00:06:45.502 START TEST lvs_grow_clean 00:06:45.502 ************************************ 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.502 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:45.761 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:45.761 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:45.761 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:45.761 10:23:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:45.761 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:46.020 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:46.020 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:46.020 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 41e89aad-b3ea-4b01-9874-5b87f10e145e lvol 150 00:06:46.279 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3dc5742d-a72f-4216-9c26-e384c31322aa 00:06:46.279 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.279 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:46.538 [2024-11-20 10:23:47.026658] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:46.538 [2024-11-20 10:23:47.026714] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:46.538 true 00:06:46.538 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:46.538 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:46.538 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:46.539 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.798 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3dc5742d-a72f-4216-9c26-e384c31322aa 00:06:47.120 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.120 [2024-11-20 10:23:47.797127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.422 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3337141 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3337141 /var/tmp/bdevperf.sock 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3337141 ']' 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.422 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.423 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:47.423 [2024-11-20 10:23:48.062759] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:06:47.423 [2024-11-20 10:23:48.062808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337141 ] 00:06:47.423 [2024-11-20 10:23:48.137856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.685 [2024-11-20 10:23:48.181583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.685 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.685 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:47.685 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:47.943 Nvme0n1 00:06:47.943 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:48.203 [ 00:06:48.203 { 00:06:48.203 "name": "Nvme0n1", 00:06:48.203 "aliases": [ 00:06:48.203 "3dc5742d-a72f-4216-9c26-e384c31322aa" 00:06:48.203 ], 00:06:48.203 "product_name": "NVMe disk", 00:06:48.203 "block_size": 4096, 00:06:48.203 "num_blocks": 38912, 00:06:48.203 "uuid": "3dc5742d-a72f-4216-9c26-e384c31322aa", 00:06:48.203 "numa_id": 1, 00:06:48.203 "assigned_rate_limits": { 00:06:48.203 "rw_ios_per_sec": 0, 00:06:48.203 "rw_mbytes_per_sec": 0, 00:06:48.203 "r_mbytes_per_sec": 0, 00:06:48.203 "w_mbytes_per_sec": 0 00:06:48.203 }, 00:06:48.203 "claimed": false, 00:06:48.203 "zoned": false, 00:06:48.203 "supported_io_types": { 00:06:48.203 "read": true, 
00:06:48.203 "write": true, 00:06:48.203 "unmap": true, 00:06:48.203 "flush": true, 00:06:48.203 "reset": true, 00:06:48.203 "nvme_admin": true, 00:06:48.203 "nvme_io": true, 00:06:48.203 "nvme_io_md": false, 00:06:48.203 "write_zeroes": true, 00:06:48.203 "zcopy": false, 00:06:48.203 "get_zone_info": false, 00:06:48.203 "zone_management": false, 00:06:48.203 "zone_append": false, 00:06:48.203 "compare": true, 00:06:48.203 "compare_and_write": true, 00:06:48.203 "abort": true, 00:06:48.203 "seek_hole": false, 00:06:48.203 "seek_data": false, 00:06:48.203 "copy": true, 00:06:48.203 "nvme_iov_md": false 00:06:48.203 }, 00:06:48.203 "memory_domains": [ 00:06:48.203 { 00:06:48.203 "dma_device_id": "system", 00:06:48.203 "dma_device_type": 1 00:06:48.203 } 00:06:48.203 ], 00:06:48.203 "driver_specific": { 00:06:48.203 "nvme": [ 00:06:48.203 { 00:06:48.203 "trid": { 00:06:48.203 "trtype": "TCP", 00:06:48.203 "adrfam": "IPv4", 00:06:48.203 "traddr": "10.0.0.2", 00:06:48.203 "trsvcid": "4420", 00:06:48.203 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:48.203 }, 00:06:48.203 "ctrlr_data": { 00:06:48.203 "cntlid": 1, 00:06:48.203 "vendor_id": "0x8086", 00:06:48.203 "model_number": "SPDK bdev Controller", 00:06:48.203 "serial_number": "SPDK0", 00:06:48.203 "firmware_revision": "25.01", 00:06:48.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.203 "oacs": { 00:06:48.203 "security": 0, 00:06:48.203 "format": 0, 00:06:48.203 "firmware": 0, 00:06:48.203 "ns_manage": 0 00:06:48.203 }, 00:06:48.203 "multi_ctrlr": true, 00:06:48.203 "ana_reporting": false 00:06:48.203 }, 00:06:48.203 "vs": { 00:06:48.203 "nvme_version": "1.3" 00:06:48.203 }, 00:06:48.203 "ns_data": { 00:06:48.203 "id": 1, 00:06:48.203 "can_share": true 00:06:48.203 } 00:06:48.203 } 00:06:48.203 ], 00:06:48.203 "mp_policy": "active_passive" 00:06:48.203 } 00:06:48.203 } 00:06:48.203 ] 00:06:48.203 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3337156 00:06:48.203 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:48.203 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:48.203 Running I/O for 10 seconds... 00:06:49.140 Latency(us) 00:06:49.140 [2024-11-20T09:23:49.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.140 Nvme0n1 : 1.00 22767.00 88.93 0.00 0.00 0.00 0.00 0.00 00:06:49.140 [2024-11-20T09:23:49.871Z] =================================================================================================================== 00:06:49.140 [2024-11-20T09:23:49.871Z] Total : 22767.00 88.93 0.00 0.00 0.00 0.00 0.00 00:06:49.140 00:06:50.077 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:50.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.336 Nvme0n1 : 2.00 22850.00 89.26 0.00 0.00 0.00 0.00 0.00 00:06:50.336 [2024-11-20T09:23:51.067Z] =================================================================================================================== 00:06:50.336 [2024-11-20T09:23:51.067Z] Total : 22850.00 89.26 0.00 0.00 0.00 0.00 0.00 00:06:50.336 00:06:50.336 true 00:06:50.336 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:50.336 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:50.595 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:50.595 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:50.595 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3337156 00:06:51.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.163 Nvme0n1 : 3.00 22962.33 89.70 0.00 0.00 0.00 0.00 0.00 00:06:51.163 [2024-11-20T09:23:51.894Z] =================================================================================================================== 00:06:51.163 [2024-11-20T09:23:51.894Z] Total : 22962.33 89.70 0.00 0.00 0.00 0.00 0.00 00:06:51.163 00:06:52.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.540 Nvme0n1 : 4.00 23036.50 89.99 0.00 0.00 0.00 0.00 0.00 00:06:52.540 [2024-11-20T09:23:53.271Z] =================================================================================================================== 00:06:52.540 [2024-11-20T09:23:53.271Z] Total : 23036.50 89.99 0.00 0.00 0.00 0.00 0.00 00:06:52.540 00:06:53.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.477 Nvme0n1 : 5.00 23104.00 90.25 0.00 0.00 0.00 0.00 0.00 00:06:53.477 [2024-11-20T09:23:54.208Z] =================================================================================================================== 00:06:53.477 [2024-11-20T09:23:54.208Z] Total : 23104.00 90.25 0.00 0.00 0.00 0.00 0.00 00:06:53.477 00:06:54.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.414 Nvme0n1 : 6.00 23139.67 90.39 0.00 0.00 0.00 0.00 0.00 00:06:54.414 [2024-11-20T09:23:55.145Z] =================================================================================================================== 00:06:54.414 
[2024-11-20T09:23:55.145Z] Total : 23139.67 90.39 0.00 0.00 0.00 0.00 0.00 00:06:54.414 00:06:55.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.351 Nvme0n1 : 7.00 23161.71 90.48 0.00 0.00 0.00 0.00 0.00 00:06:55.351 [2024-11-20T09:23:56.082Z] =================================================================================================================== 00:06:55.351 [2024-11-20T09:23:56.082Z] Total : 23161.71 90.48 0.00 0.00 0.00 0.00 0.00 00:06:55.351 00:06:56.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.290 Nvme0n1 : 8.00 23196.50 90.61 0.00 0.00 0.00 0.00 0.00 00:06:56.290 [2024-11-20T09:23:57.021Z] =================================================================================================================== 00:06:56.290 [2024-11-20T09:23:57.021Z] Total : 23196.50 90.61 0.00 0.00 0.00 0.00 0.00 00:06:56.290 00:06:57.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.228 Nvme0n1 : 9.00 23189.56 90.58 0.00 0.00 0.00 0.00 0.00 00:06:57.228 [2024-11-20T09:23:57.959Z] =================================================================================================================== 00:06:57.228 [2024-11-20T09:23:57.959Z] Total : 23189.56 90.58 0.00 0.00 0.00 0.00 0.00 00:06:57.228 00:06:58.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.166 Nvme0n1 : 10.00 23197.80 90.62 0.00 0.00 0.00 0.00 0.00 00:06:58.166 [2024-11-20T09:23:58.897Z] =================================================================================================================== 00:06:58.166 [2024-11-20T09:23:58.897Z] Total : 23197.80 90.62 0.00 0.00 0.00 0.00 0.00 00:06:58.166 00:06:58.166 00:06:58.166 Latency(us) 00:06:58.166 [2024-11-20T09:23:58.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:58.166 Nvme0n1 : 10.01 23196.29 90.61 0.00 0.00 5515.33 3191.32 12024.43 00:06:58.166 [2024-11-20T09:23:58.897Z] =================================================================================================================== 00:06:58.166 [2024-11-20T09:23:58.897Z] Total : 23196.29 90.61 0.00 0.00 5515.33 3191.32 12024.43 00:06:58.166 { 00:06:58.166 "results": [ 00:06:58.166 { 00:06:58.166 "job": "Nvme0n1", 00:06:58.166 "core_mask": "0x2", 00:06:58.166 "workload": "randwrite", 00:06:58.166 "status": "finished", 00:06:58.166 "queue_depth": 128, 00:06:58.166 "io_size": 4096, 00:06:58.166 "runtime": 10.006171, 00:06:58.166 "iops": 23196.285572173412, 00:06:58.166 "mibps": 90.61049051630239, 00:06:58.166 "io_failed": 0, 00:06:58.166 "io_timeout": 0, 00:06:58.166 "avg_latency_us": 5515.325744167115, 00:06:58.166 "min_latency_us": 3191.318260869565, 00:06:58.166 "max_latency_us": 12024.431304347827 00:06:58.166 } 00:06:58.166 ], 00:06:58.166 "core_count": 1 00:06:58.166 } 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3337141 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3337141 ']' 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3337141 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337141 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:58.426 10:23:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337141' 00:06:58.426 killing process with pid 3337141 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3337141 00:06:58.426 Received shutdown signal, test time was about 10.000000 seconds 00:06:58.426 00:06:58.426 Latency(us) 00:06:58.426 [2024-11-20T09:23:59.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.426 [2024-11-20T09:23:59.157Z] =================================================================================================================== 00:06:58.426 [2024-11-20T09:23:59.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:58.426 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3337141 00:06:58.426 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:58.683 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.942 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:58.942 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.201 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:59.201 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:59.201 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:59.201 [2024-11-20 10:23:59.905296] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:59.461 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:59.461 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:59.461 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:59.461 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.461 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.462 
10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:59.462 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:06:59.462 request: 00:06:59.462 { 00:06:59.462 "uuid": "41e89aad-b3ea-4b01-9874-5b87f10e145e", 00:06:59.462 "method": "bdev_lvol_get_lvstores", 00:06:59.462 "req_id": 1 00:06:59.462 } 00:06:59.462 Got JSON-RPC error response 00:06:59.462 response: 00:06:59.462 { 00:06:59.462 "code": -19, 00:06:59.462 "message": "No such device" 00:06:59.462 } 00:06:59.462 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:59.462 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.462 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.462 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.462 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.721 aio_bdev 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3dc5742d-a72f-4216-9c26-e384c31322aa 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3dc5742d-a72f-4216-9c26-e384c31322aa 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.721 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:59.980 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3dc5742d-a72f-4216-9c26-e384c31322aa -t 2000 00:07:00.240 [ 00:07:00.240 { 00:07:00.240 "name": "3dc5742d-a72f-4216-9c26-e384c31322aa", 00:07:00.240 "aliases": [ 00:07:00.240 "lvs/lvol" 00:07:00.240 ], 00:07:00.240 "product_name": "Logical Volume", 00:07:00.240 "block_size": 4096, 00:07:00.240 "num_blocks": 38912, 00:07:00.240 "uuid": "3dc5742d-a72f-4216-9c26-e384c31322aa", 00:07:00.240 "assigned_rate_limits": { 00:07:00.241 "rw_ios_per_sec": 0, 00:07:00.241 "rw_mbytes_per_sec": 0, 00:07:00.241 "r_mbytes_per_sec": 0, 00:07:00.241 "w_mbytes_per_sec": 0 00:07:00.241 }, 00:07:00.241 "claimed": false, 00:07:00.241 "zoned": false, 00:07:00.241 "supported_io_types": { 00:07:00.241 "read": true, 00:07:00.241 "write": true, 00:07:00.241 "unmap": true, 00:07:00.241 "flush": false, 00:07:00.241 "reset": true, 00:07:00.241 
"nvme_admin": false, 00:07:00.241 "nvme_io": false, 00:07:00.241 "nvme_io_md": false, 00:07:00.241 "write_zeroes": true, 00:07:00.241 "zcopy": false, 00:07:00.241 "get_zone_info": false, 00:07:00.241 "zone_management": false, 00:07:00.241 "zone_append": false, 00:07:00.241 "compare": false, 00:07:00.241 "compare_and_write": false, 00:07:00.241 "abort": false, 00:07:00.241 "seek_hole": true, 00:07:00.241 "seek_data": true, 00:07:00.241 "copy": false, 00:07:00.241 "nvme_iov_md": false 00:07:00.241 }, 00:07:00.241 "driver_specific": { 00:07:00.241 "lvol": { 00:07:00.241 "lvol_store_uuid": "41e89aad-b3ea-4b01-9874-5b87f10e145e", 00:07:00.241 "base_bdev": "aio_bdev", 00:07:00.241 "thin_provision": false, 00:07:00.241 "num_allocated_clusters": 38, 00:07:00.241 "snapshot": false, 00:07:00.241 "clone": false, 00:07:00.241 "esnap_clone": false 00:07:00.241 } 00:07:00.241 } 00:07:00.241 } 00:07:00.241 ] 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:00.241 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:07:00.500 10:24:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:00.500 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3dc5742d-a72f-4216-9c26-e384c31322aa 00:07:00.760 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41e89aad-b3ea-4b01-9874-5b87f10e145e 00:07:01.019 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:01.019 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.019 00:07:01.019 real 0m15.672s 00:07:01.019 user 0m15.218s 00:07:01.019 sys 0m1.542s 00:07:01.019 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.019 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:01.019 ************************************ 00:07:01.019 END TEST lvs_grow_clean 00:07:01.019 ************************************ 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.278 ************************************ 
00:07:01.278 START TEST lvs_grow_dirty 00:07:01.278 ************************************ 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.278 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.537 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:01.537 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:01.537 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:01.537 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:01.537 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:01.796 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:01.796 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:01.796 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc lvol 150 00:07:02.055 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:02.055 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.055 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:02.055 [2024-11-20 10:24:02.767896] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:02.055 [2024-11-20 10:24:02.767959] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:02.055 true 00:07:02.055 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:02.314 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:02.314 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:02.314 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.573 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:02.832 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:02.832 [2024-11-20 10:24:03.502109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.832 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3339870 00:07:03.091 10:24:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3339870 /var/tmp/bdevperf.sock 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3339870 ']' 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:03.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.091 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:03.091 [2024-11-20 10:24:03.750437] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:07:03.091 [2024-11-20 10:24:03.750484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339870 ] 00:07:03.358 [2024-11-20 10:24:03.824084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.358 [2024-11-20 10:24:03.865192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.358 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.358 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:03.358 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:03.620 Nvme0n1 00:07:03.620 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:03.879 [ 00:07:03.879 { 00:07:03.879 "name": "Nvme0n1", 00:07:03.879 "aliases": [ 00:07:03.879 "fce371d6-0ff3-4738-8fd2-60eb723438d7" 00:07:03.879 ], 00:07:03.879 "product_name": "NVMe disk", 00:07:03.879 "block_size": 4096, 00:07:03.879 "num_blocks": 38912, 00:07:03.879 "uuid": "fce371d6-0ff3-4738-8fd2-60eb723438d7", 00:07:03.879 "numa_id": 1, 00:07:03.879 "assigned_rate_limits": { 00:07:03.879 "rw_ios_per_sec": 0, 00:07:03.879 "rw_mbytes_per_sec": 0, 00:07:03.879 "r_mbytes_per_sec": 0, 00:07:03.879 "w_mbytes_per_sec": 0 00:07:03.879 }, 00:07:03.879 "claimed": false, 00:07:03.879 "zoned": false, 00:07:03.879 "supported_io_types": { 00:07:03.879 "read": true, 
00:07:03.879 "write": true, 00:07:03.879 "unmap": true, 00:07:03.879 "flush": true, 00:07:03.879 "reset": true, 00:07:03.879 "nvme_admin": true, 00:07:03.879 "nvme_io": true, 00:07:03.879 "nvme_io_md": false, 00:07:03.879 "write_zeroes": true, 00:07:03.879 "zcopy": false, 00:07:03.879 "get_zone_info": false, 00:07:03.879 "zone_management": false, 00:07:03.879 "zone_append": false, 00:07:03.879 "compare": true, 00:07:03.879 "compare_and_write": true, 00:07:03.879 "abort": true, 00:07:03.879 "seek_hole": false, 00:07:03.879 "seek_data": false, 00:07:03.879 "copy": true, 00:07:03.879 "nvme_iov_md": false 00:07:03.879 }, 00:07:03.879 "memory_domains": [ 00:07:03.879 { 00:07:03.879 "dma_device_id": "system", 00:07:03.879 "dma_device_type": 1 00:07:03.879 } 00:07:03.879 ], 00:07:03.879 "driver_specific": { 00:07:03.879 "nvme": [ 00:07:03.879 { 00:07:03.879 "trid": { 00:07:03.879 "trtype": "TCP", 00:07:03.879 "adrfam": "IPv4", 00:07:03.879 "traddr": "10.0.0.2", 00:07:03.879 "trsvcid": "4420", 00:07:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:03.879 }, 00:07:03.879 "ctrlr_data": { 00:07:03.879 "cntlid": 1, 00:07:03.879 "vendor_id": "0x8086", 00:07:03.879 "model_number": "SPDK bdev Controller", 00:07:03.879 "serial_number": "SPDK0", 00:07:03.880 "firmware_revision": "25.01", 00:07:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:03.880 "oacs": { 00:07:03.880 "security": 0, 00:07:03.880 "format": 0, 00:07:03.880 "firmware": 0, 00:07:03.880 "ns_manage": 0 00:07:03.880 }, 00:07:03.880 "multi_ctrlr": true, 00:07:03.880 "ana_reporting": false 00:07:03.880 }, 00:07:03.880 "vs": { 00:07:03.880 "nvme_version": "1.3" 00:07:03.880 }, 00:07:03.880 "ns_data": { 00:07:03.880 "id": 1, 00:07:03.880 "can_share": true 00:07:03.880 } 00:07:03.880 } 00:07:03.880 ], 00:07:03.880 "mp_policy": "active_passive" 00:07:03.880 } 00:07:03.880 } 00:07:03.880 ] 00:07:03.880 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3340095 00:07:03.880 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:03.880 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:03.880 Running I/O for 10 seconds... 00:07:04.818 Latency(us) 00:07:04.818 [2024-11-20T09:24:05.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.818 Nvme0n1 : 1.00 21847.00 85.34 0.00 0.00 0.00 0.00 0.00 00:07:04.818 [2024-11-20T09:24:05.549Z] =================================================================================================================== 00:07:04.818 [2024-11-20T09:24:05.549Z] Total : 21847.00 85.34 0.00 0.00 0.00 0.00 0.00 00:07:04.818 00:07:05.754 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:06.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.014 Nvme0n1 : 2.00 21967.50 85.81 0.00 0.00 0.00 0.00 0.00 00:07:06.014 [2024-11-20T09:24:06.745Z] =================================================================================================================== 00:07:06.014 [2024-11-20T09:24:06.745Z] Total : 21967.50 85.81 0.00 0.00 0.00 0.00 0.00 00:07:06.014 00:07:06.014 true 00:07:06.014 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:06.014 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:06.273 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:06.273 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:06.273 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3340095 00:07:06.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.841 Nvme0n1 : 3.00 21999.67 85.94 0.00 0.00 0.00 0.00 0.00 00:07:06.841 [2024-11-20T09:24:07.572Z] =================================================================================================================== 00:07:06.841 [2024-11-20T09:24:07.572Z] Total : 21999.67 85.94 0.00 0.00 0.00 0.00 0.00 00:07:06.841 00:07:08.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.219 Nvme0n1 : 4.00 22037.75 86.08 0.00 0.00 0.00 0.00 0.00 00:07:08.219 [2024-11-20T09:24:08.950Z] =================================================================================================================== 00:07:08.219 [2024-11-20T09:24:08.950Z] Total : 22037.75 86.08 0.00 0.00 0.00 0.00 0.00 00:07:08.219 00:07:09.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.156 Nvme0n1 : 5.00 22078.20 86.24 0.00 0.00 0.00 0.00 0.00 00:07:09.156 [2024-11-20T09:24:09.887Z] =================================================================================================================== 00:07:09.156 [2024-11-20T09:24:09.887Z] Total : 22078.20 86.24 0.00 0.00 0.00 0.00 0.00 00:07:09.156 00:07:10.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.093 Nvme0n1 : 6.00 22085.17 86.27 0.00 0.00 0.00 0.00 0.00 00:07:10.093 [2024-11-20T09:24:10.824Z] =================================================================================================================== 00:07:10.093 
[2024-11-20T09:24:10.824Z] Total : 22085.17 86.27 0.00 0.00 0.00 0.00 0.00 00:07:10.093 00:07:11.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.030 Nvme0n1 : 7.00 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:07:11.030 [2024-11-20T09:24:11.761Z] =================================================================================================================== 00:07:11.030 [2024-11-20T09:24:11.761Z] Total : 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:07:11.030 00:07:11.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.966 Nvme0n1 : 8.00 22153.88 86.54 0.00 0.00 0.00 0.00 0.00 00:07:11.966 [2024-11-20T09:24:12.697Z] =================================================================================================================== 00:07:11.966 [2024-11-20T09:24:12.697Z] Total : 22153.88 86.54 0.00 0.00 0.00 0.00 0.00 00:07:11.966 00:07:12.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.903 Nvme0n1 : 9.00 22183.89 86.66 0.00 0.00 0.00 0.00 0.00 00:07:12.903 [2024-11-20T09:24:13.634Z] =================================================================================================================== 00:07:12.903 [2024-11-20T09:24:13.634Z] Total : 22183.89 86.66 0.00 0.00 0.00 0.00 0.00 00:07:12.903 00:07:13.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.841 Nvme0n1 : 10.00 22204.70 86.74 0.00 0.00 0.00 0.00 0.00 00:07:13.841 [2024-11-20T09:24:14.572Z] =================================================================================================================== 00:07:13.841 [2024-11-20T09:24:14.572Z] Total : 22204.70 86.74 0.00 0.00 0.00 0.00 0.00 00:07:13.841 00:07:13.841 00:07:13.841 Latency(us) 00:07:13.841 [2024-11-20T09:24:14.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:13.841 Nvme0n1 : 10.01 22205.32 86.74 0.00 0.00 5760.18 4416.56 10770.70 00:07:13.841 [2024-11-20T09:24:14.572Z] =================================================================================================================== 00:07:13.841 [2024-11-20T09:24:14.572Z] Total : 22205.32 86.74 0.00 0.00 5760.18 4416.56 10770.70 00:07:13.841 { 00:07:13.841 "results": [ 00:07:13.841 { 00:07:13.841 "job": "Nvme0n1", 00:07:13.841 "core_mask": "0x2", 00:07:13.841 "workload": "randwrite", 00:07:13.841 "status": "finished", 00:07:13.841 "queue_depth": 128, 00:07:13.841 "io_size": 4096, 00:07:13.841 "runtime": 10.005487, 00:07:13.841 "iops": 22205.315943142, 00:07:13.841 "mibps": 86.73951540289843, 00:07:13.841 "io_failed": 0, 00:07:13.841 "io_timeout": 0, 00:07:13.841 "avg_latency_us": 5760.177228283619, 00:07:13.841 "min_latency_us": 4416.55652173913, 00:07:13.841 "max_latency_us": 10770.699130434783 00:07:13.841 } 00:07:13.841 ], 00:07:13.841 "core_count": 1 00:07:13.841 } 00:07:13.841 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3339870 00:07:13.841 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3339870 ']' 00:07:13.841 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3339870 00:07:13.841 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:13.841 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339870 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:14.101 10:24:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339870' 00:07:14.101 killing process with pid 3339870 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3339870 00:07:14.101 Received shutdown signal, test time was about 10.000000 seconds 00:07:14.101 00:07:14.101 Latency(us) 00:07:14.101 [2024-11-20T09:24:14.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.101 [2024-11-20T09:24:14.832Z] =================================================================================================================== 00:07:14.101 [2024-11-20T09:24:14.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3339870 00:07:14.101 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.360 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:14.620 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:14.620 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3336641 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3336641 00:07:14.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3336641 Killed "${NVMF_APP[@]}" "$@" 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3342341 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3342341 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:14.879 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3342341 ']' 00:07:14.880 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.880 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.880 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.880 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.880 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.880 [2024-11-20 10:24:15.497029] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:14.880 [2024-11-20 10:24:15.497079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.880 [2024-11-20 10:24:15.577519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.139 [2024-11-20 10:24:15.619232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.139 [2024-11-20 10:24:15.619265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.139 [2024-11-20 10:24:15.619273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.139 [2024-11-20 10:24:15.619278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.139 [2024-11-20 10:24:15.619284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.139 [2024-11-20 10:24:15.619852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.139 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.398 [2024-11-20 10:24:15.921724] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:15.398 [2024-11-20 10:24:15.921801] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:15.398 [2024-11-20 10:24:15.921826] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fce371d6-0ff3-4738-8fd2-60eb723438d7 
00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.398 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:15.658 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fce371d6-0ff3-4738-8fd2-60eb723438d7 -t 2000 00:07:15.658 [ 00:07:15.658 { 00:07:15.658 "name": "fce371d6-0ff3-4738-8fd2-60eb723438d7", 00:07:15.658 "aliases": [ 00:07:15.658 "lvs/lvol" 00:07:15.658 ], 00:07:15.658 "product_name": "Logical Volume", 00:07:15.658 "block_size": 4096, 00:07:15.658 "num_blocks": 38912, 00:07:15.658 "uuid": "fce371d6-0ff3-4738-8fd2-60eb723438d7", 00:07:15.658 "assigned_rate_limits": { 00:07:15.658 "rw_ios_per_sec": 0, 00:07:15.658 "rw_mbytes_per_sec": 0, 00:07:15.658 "r_mbytes_per_sec": 0, 00:07:15.658 "w_mbytes_per_sec": 0 00:07:15.658 }, 00:07:15.658 "claimed": false, 00:07:15.658 "zoned": false, 00:07:15.658 "supported_io_types": { 00:07:15.658 "read": true, 00:07:15.658 "write": true, 00:07:15.658 "unmap": true, 00:07:15.658 "flush": false, 00:07:15.658 "reset": true, 00:07:15.658 "nvme_admin": false, 00:07:15.658 "nvme_io": false, 00:07:15.658 "nvme_io_md": false, 00:07:15.658 "write_zeroes": true, 00:07:15.658 "zcopy": false, 00:07:15.658 "get_zone_info": false, 00:07:15.658 "zone_management": false, 00:07:15.658 "zone_append": 
false, 00:07:15.658 "compare": false, 00:07:15.658 "compare_and_write": false, 00:07:15.658 "abort": false, 00:07:15.658 "seek_hole": true, 00:07:15.658 "seek_data": true, 00:07:15.658 "copy": false, 00:07:15.658 "nvme_iov_md": false 00:07:15.658 }, 00:07:15.658 "driver_specific": { 00:07:15.658 "lvol": { 00:07:15.658 "lvol_store_uuid": "4cb8cbea-1c73-478b-8f6b-bdbc42875cbc", 00:07:15.658 "base_bdev": "aio_bdev", 00:07:15.658 "thin_provision": false, 00:07:15.658 "num_allocated_clusters": 38, 00:07:15.658 "snapshot": false, 00:07:15.658 "clone": false, 00:07:15.658 "esnap_clone": false 00:07:15.658 } 00:07:15.658 } 00:07:15.658 } 00:07:15.658 ] 00:07:15.658 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:15.658 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:15.658 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:15.917 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:15.917 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:15.917 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:16.177 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:16.177 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:16.177 [2024-11-20 10:24:16.894582] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.436 10:24:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.436 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:16.436 request: 00:07:16.436 { 00:07:16.436 "uuid": "4cb8cbea-1c73-478b-8f6b-bdbc42875cbc", 00:07:16.436 "method": "bdev_lvol_get_lvstores", 00:07:16.436 "req_id": 1 00:07:16.436 } 00:07:16.436 Got JSON-RPC error response 00:07:16.436 response: 00:07:16.436 { 00:07:16.436 "code": -19, 00:07:16.436 "message": "No such device" 00:07:16.436 } 00:07:16.436 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:16.436 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.436 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.436 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.437 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.696 aio_bdev 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.696 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:16.955 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fce371d6-0ff3-4738-8fd2-60eb723438d7 -t 2000 00:07:16.955 [ 00:07:16.955 { 00:07:16.955 "name": "fce371d6-0ff3-4738-8fd2-60eb723438d7", 00:07:16.955 "aliases": [ 00:07:16.955 "lvs/lvol" 00:07:16.955 ], 00:07:16.955 "product_name": "Logical Volume", 00:07:16.955 "block_size": 4096, 00:07:16.955 "num_blocks": 38912, 00:07:16.955 "uuid": "fce371d6-0ff3-4738-8fd2-60eb723438d7", 00:07:16.955 "assigned_rate_limits": { 00:07:16.955 "rw_ios_per_sec": 0, 00:07:16.955 "rw_mbytes_per_sec": 0, 00:07:16.955 "r_mbytes_per_sec": 0, 00:07:16.955 "w_mbytes_per_sec": 0 00:07:16.955 }, 00:07:16.955 "claimed": false, 00:07:16.955 "zoned": false, 00:07:16.955 "supported_io_types": { 00:07:16.955 "read": true, 00:07:16.955 "write": true, 00:07:16.955 "unmap": true, 00:07:16.955 "flush": false, 00:07:16.955 "reset": true, 00:07:16.955 "nvme_admin": false, 00:07:16.955 "nvme_io": false, 00:07:16.955 "nvme_io_md": false, 00:07:16.955 "write_zeroes": true, 00:07:16.955 "zcopy": false, 00:07:16.955 "get_zone_info": false, 00:07:16.955 "zone_management": false, 00:07:16.955 "zone_append": false, 00:07:16.955 "compare": false, 00:07:16.955 "compare_and_write": false, 
00:07:16.955 "abort": false, 00:07:16.955 "seek_hole": true, 00:07:16.955 "seek_data": true, 00:07:16.955 "copy": false, 00:07:16.955 "nvme_iov_md": false 00:07:16.955 }, 00:07:16.955 "driver_specific": { 00:07:16.955 "lvol": { 00:07:16.955 "lvol_store_uuid": "4cb8cbea-1c73-478b-8f6b-bdbc42875cbc", 00:07:16.955 "base_bdev": "aio_bdev", 00:07:16.955 "thin_provision": false, 00:07:16.955 "num_allocated_clusters": 38, 00:07:16.955 "snapshot": false, 00:07:16.955 "clone": false, 00:07:16.955 "esnap_clone": false 00:07:16.955 } 00:07:16.955 } 00:07:16.955 } 00:07:16.955 ] 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:17.214 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:17.473 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:17.473 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fce371d6-0ff3-4738-8fd2-60eb723438d7 00:07:17.733 10:24:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cb8cbea-1c73-478b-8f6b-bdbc42875cbc 00:07:17.992 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:17.992 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.992 00:07:17.992 real 0m16.917s 00:07:17.992 user 0m43.539s 00:07:17.992 sys 0m3.994s 00:07:17.992 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.992 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 ************************************ 00:07:17.992 END TEST lvs_grow_dirty 00:07:17.992 ************************************ 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:18.252 nvmf_trace.0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.252 rmmod nvme_tcp 00:07:18.252 rmmod nvme_fabrics 00:07:18.252 rmmod nvme_keyring 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3342341 ']' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3342341 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3342341 ']' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3342341 
00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3342341 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.252 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.253 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3342341' 00:07:18.253 killing process with pid 3342341 00:07:18.253 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3342341 00:07:18.253 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3342341 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.618 10:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:20.547 00:07:20.547 real 0m41.896s 00:07:20.547 user 1m4.485s 00:07:20.547 sys 0m10.472s 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.547 ************************************ 00:07:20.547 END TEST nvmf_lvs_grow 00:07:20.547 ************************************ 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.547 ************************************ 00:07:20.547 START TEST nvmf_bdev_io_wait 00:07:20.547 ************************************ 00:07:20.547 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:20.806 * Looking for test storage... 
00:07:20.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:20.806 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.807 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.807 --rc genhtml_branch_coverage=1 00:07:20.807 --rc genhtml_function_coverage=1 00:07:20.807 --rc genhtml_legend=1 00:07:20.807 --rc geninfo_all_blocks=1 00:07:20.807 --rc geninfo_unexecuted_blocks=1 00:07:20.807 00:07:20.807 ' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.807 --rc genhtml_branch_coverage=1 00:07:20.807 --rc genhtml_function_coverage=1 00:07:20.807 --rc genhtml_legend=1 00:07:20.807 --rc geninfo_all_blocks=1 00:07:20.807 --rc geninfo_unexecuted_blocks=1 00:07:20.807 00:07:20.807 ' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.807 --rc genhtml_branch_coverage=1 00:07:20.807 --rc genhtml_function_coverage=1 00:07:20.807 --rc genhtml_legend=1 00:07:20.807 --rc geninfo_all_blocks=1 00:07:20.807 --rc geninfo_unexecuted_blocks=1 00:07:20.807 00:07:20.807 ' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.807 --rc genhtml_branch_coverage=1 00:07:20.807 --rc genhtml_function_coverage=1 00:07:20.807 --rc genhtml_legend=1 00:07:20.807 --rc geninfo_all_blocks=1 00:07:20.807 --rc geninfo_unexecuted_blocks=1 00:07:20.807 00:07:20.807 ' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.807 10:24:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.807 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:27.377 10:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:27.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:27.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.377 10:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:27.377 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:27.378 Found net devices under 0000:86:00.0: cvl_0_0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.378 
10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:27.378 Found net devices under 0000:86:00.1: cvl_0_1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.378 10:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:27.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:07:27.378 00:07:27.378 --- 10.0.0.2 ping statistics --- 00:07:27.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.378 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:07:27.378 00:07:27.378 --- 10.0.0.1 ping statistics --- 00:07:27.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.378 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3346404 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3346404 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3346404 ']' 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 [2024-11-20 10:24:27.504419] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:27.378 [2024-11-20 10:24:27.504476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.378 [2024-11-20 10:24:27.585335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.378 [2024-11-20 10:24:27.628143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.378 [2024-11-20 10:24:27.628184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:27.378 [2024-11-20 10:24:27.628192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.378 [2024-11-20 10:24:27.628198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.378 [2024-11-20 10:24:27.628203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.378 [2024-11-20 10:24:27.629789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.378 [2024-11-20 10:24:27.629896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.378 [2024-11-20 10:24:27.630007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.378 [2024-11-20 10:24:27.630006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 10:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.379 [2024-11-20 10:24:27.779072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.379 Malloc0 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.379 
10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.379 [2024-11-20 10:24:27.834772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3346598 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3346601 
00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.379 { 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme$subsystem", 00:07:27.379 "trtype": "$TEST_TRANSPORT", 00:07:27.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "$NVMF_PORT", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.379 "hdgst": ${hdgst:-false}, 00:07:27.379 "ddgst": ${ddgst:-false} 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 } 00:07:27.379 EOF 00:07:27.379 )") 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3346604 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.379 { 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme$subsystem", 00:07:27.379 "trtype": "$TEST_TRANSPORT", 00:07:27.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "$NVMF_PORT", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.379 "hdgst": ${hdgst:-false}, 00:07:27.379 "ddgst": ${ddgst:-false} 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 } 00:07:27.379 EOF 00:07:27.379 )") 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3346608 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.379 { 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme$subsystem", 00:07:27.379 "trtype": "$TEST_TRANSPORT", 00:07:27.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "$NVMF_PORT", 00:07:27.379 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.379 "hdgst": ${hdgst:-false}, 00:07:27.379 "ddgst": ${ddgst:-false} 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 } 00:07:27.379 EOF 00:07:27.379 )") 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.379 { 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme$subsystem", 00:07:27.379 "trtype": "$TEST_TRANSPORT", 00:07:27.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "$NVMF_PORT", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.379 "hdgst": ${hdgst:-false}, 00:07:27.379 "ddgst": ${ddgst:-false} 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 } 00:07:27.379 EOF 00:07:27.379 )") 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3346598 00:07:27.379 10:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme1", 00:07:27.379 "trtype": "tcp", 00:07:27.379 "traddr": "10.0.0.2", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "4420", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.379 "hdgst": false, 00:07:27.379 "ddgst": false 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 }' 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme1", 00:07:27.379 "trtype": "tcp", 00:07:27.379 "traddr": "10.0.0.2", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "4420", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.379 "hdgst": false, 00:07:27.379 "ddgst": false 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 }' 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme1", 00:07:27.379 "trtype": "tcp", 00:07:27.379 "traddr": "10.0.0.2", 00:07:27.379 "adrfam": "ipv4", 00:07:27.379 "trsvcid": "4420", 00:07:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.379 "hdgst": false, 00:07:27.379 "ddgst": false 00:07:27.379 }, 00:07:27.379 "method": "bdev_nvme_attach_controller" 00:07:27.379 }' 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.379 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.379 "params": { 00:07:27.379 "name": "Nvme1", 00:07:27.379 "trtype": "tcp", 00:07:27.379 "traddr": "10.0.0.2", 00:07:27.380 "adrfam": "ipv4", 00:07:27.380 "trsvcid": "4420", 00:07:27.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.380 "hdgst": false, 00:07:27.380 "ddgst": false 00:07:27.380 }, 00:07:27.380 "method": "bdev_nvme_attach_controller" 00:07:27.380 }' 00:07:27.380 [2024-11-20 10:24:27.887418] Starting SPDK v25.01-pre git sha1 
876509865 / DPDK 24.03.0 initialization... 00:07:27.380 [2024-11-20 10:24:27.887467] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:27.380 [2024-11-20 10:24:27.887632] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:27.380 [2024-11-20 10:24:27.887674] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:27.380 [2024-11-20 10:24:27.889905] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:27.380 [2024-11-20 10:24:27.889934] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:27.380 [2024-11-20 10:24:27.889955] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:27.380 [2024-11-20 10:24:27.889980] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:27.380 [2024-11-20 10:24:28.085302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.638 [2024-11-20 10:24:28.128361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.638 [2024-11-20 10:24:28.177545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.638 [2024-11-20 10:24:28.231571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:27.638 [2024-11-20 
10:24:28.236390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.638 [2024-11-20 10:24:28.279873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:27.638 [2024-11-20 10:24:28.288550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.638 [2024-11-20 10:24:28.331510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:27.896 Running I/O for 1 seconds... 00:07:27.896 Running I/O for 1 seconds... 00:07:27.896 Running I/O for 1 seconds... 00:07:27.896 Running I/O for 1 seconds... 00:07:28.830 238736.00 IOPS, 932.56 MiB/s 00:07:28.830 Latency(us) 00:07:28.830 [2024-11-20T09:24:29.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.831 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:28.831 Nvme1n1 : 1.00 238362.14 931.10 0.00 0.00 534.73 229.73 1567.17 00:07:28.831 [2024-11-20T09:24:29.562Z] =================================================================================================================== 00:07:28.831 [2024-11-20T09:24:29.562Z] Total : 238362.14 931.10 0.00 0.00 534.73 229.73 1567.17 00:07:28.831 7910.00 IOPS, 30.90 MiB/s 00:07:28.831 Latency(us) 00:07:28.831 [2024-11-20T09:24:29.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.831 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:28.831 Nvme1n1 : 1.02 7903.11 30.87 0.00 0.00 16098.58 6753.06 29405.72 00:07:28.831 [2024-11-20T09:24:29.562Z] =================================================================================================================== 00:07:28.831 [2024-11-20T09:24:29.562Z] Total : 7903.11 30.87 0.00 0.00 16098.58 6753.06 29405.72 00:07:28.831 12563.00 IOPS, 49.07 MiB/s 00:07:28.831 Latency(us) 00:07:28.831 [2024-11-20T09:24:29.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.831 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, 
IO size: 4096) 00:07:28.831 Nvme1n1 : 1.01 12624.24 49.31 0.00 0.00 10108.10 4331.07 21199.47 00:07:28.831 [2024-11-20T09:24:29.562Z] =================================================================================================================== 00:07:28.831 [2024-11-20T09:24:29.562Z] Total : 12624.24 49.31 0.00 0.00 10108.10 4331.07 21199.47 00:07:29.090 7680.00 IOPS, 30.00 MiB/s 00:07:29.090 Latency(us) 00:07:29.090 [2024-11-20T09:24:29.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.090 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:29.090 Nvme1n1 : 1.01 7802.71 30.48 0.00 0.00 16363.27 3761.20 40119.43 00:07:29.090 [2024-11-20T09:24:29.821Z] =================================================================================================================== 00:07:29.090 [2024-11-20T09:24:29.821Z] Total : 7802.71 30.48 0.00 0.00 16363.27 3761.20 40119.43 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3346601 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3346604 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3346608 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.090 rmmod nvme_tcp 00:07:29.090 rmmod nvme_fabrics 00:07:29.090 rmmod nvme_keyring 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3346404 ']' 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3346404 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3346404 ']' 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3346404 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.090 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346404 00:07:29.350 10:24:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346404' 00:07:29.350 killing process with pid 3346404 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3346404 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3346404 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.350 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.350 10:24:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.888 00:07:31.888 real 0m10.833s 00:07:31.888 user 0m16.092s 00:07:31.888 sys 0m6.235s 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.888 ************************************ 00:07:31.888 END TEST nvmf_bdev_io_wait 00:07:31.888 ************************************ 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.888 ************************************ 00:07:31.888 START TEST nvmf_queue_depth 00:07:31.888 ************************************ 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.888 * Looking for test storage... 
00:07:31.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:31.888 
10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.888 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:31.888 --rc genhtml_branch_coverage=1 00:07:31.888 --rc genhtml_function_coverage=1 00:07:31.888 --rc genhtml_legend=1 00:07:31.888 --rc geninfo_all_blocks=1 00:07:31.888 --rc geninfo_unexecuted_blocks=1 00:07:31.888 00:07:31.888 ' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.888 --rc genhtml_branch_coverage=1 00:07:31.888 --rc genhtml_function_coverage=1 00:07:31.888 --rc genhtml_legend=1 00:07:31.888 --rc geninfo_all_blocks=1 00:07:31.888 --rc geninfo_unexecuted_blocks=1 00:07:31.888 00:07:31.888 ' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.888 --rc genhtml_branch_coverage=1 00:07:31.888 --rc genhtml_function_coverage=1 00:07:31.888 --rc genhtml_legend=1 00:07:31.888 --rc geninfo_all_blocks=1 00:07:31.888 --rc geninfo_unexecuted_blocks=1 00:07:31.888 00:07:31.888 ' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.888 --rc genhtml_branch_coverage=1 00:07:31.888 --rc genhtml_function_coverage=1 00:07:31.888 --rc genhtml_legend=1 00:07:31.888 --rc geninfo_all_blocks=1 00:07:31.888 --rc geninfo_unexecuted_blocks=1 00:07:31.888 00:07:31.888 ' 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.888 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.889 10:24:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.889 10:24:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.889 10:24:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.889 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.465 10:24:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.465 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.465 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.465 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:38.466 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.466 Found net devices under 0000:86:00.1: cvl_0_1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.466 
10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:07:38.466 00:07:38.466 --- 10.0.0.2 ping statistics --- 00:07:38.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.466 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:38.466 00:07:38.466 --- 10.0.0.1 ping statistics --- 00:07:38.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.466 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3350440 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3350440 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3350440 ']' 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.466 [2024-11-20 10:24:38.350263] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:07:38.466 [2024-11-20 10:24:38.350317] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.466 [2024-11-20 10:24:38.434088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.466 [2024-11-20 10:24:38.475941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.466 [2024-11-20 10:24:38.475982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:38.466 [2024-11-20 10:24:38.475990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.466 [2024-11-20 10:24:38.475996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.466 [2024-11-20 10:24:38.476001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.466 [2024-11-20 10:24:38.476556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.466 [2024-11-20 10:24:38.612464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.466 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
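The `rpc_cmd bdev_malloc_create 64 512 -b Malloc0` call above uses the `MALLOC_BDEV_SIZE=64` and `MALLOC_BLOCK_SIZE=512` values set earlier by queue_depth.sh. As a quick sanity check of the resulting bdev geometry (assuming, as in SPDK's `bdev_malloc_create` RPC, that the size argument is in MiB):

```python
# Geometry of the malloc bdev created by the test:
# 64 MiB total, 512-byte logical blocks.
MALLOC_BDEV_SIZE_MIB = 64
MALLOC_BLOCK_SIZE = 512

total_bytes = MALLOC_BDEV_SIZE_MIB * 1024 * 1024
num_blocks = total_bytes // MALLOC_BLOCK_SIZE
print(num_blocks)  # 131072 blocks of 512 bytes
```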
00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 Malloc0 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 [2024-11-20 10:24:38.662557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.467 10:24:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3350466 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3350466 /var/tmp/bdevperf.sock 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3350466 ']' 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 [2024-11-20 10:24:38.715835] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:07:38.467 [2024-11-20 10:24:38.715878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350466 ] 00:07:38.467 [2024-11-20 10:24:38.791104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.467 [2024-11-20 10:24:38.834784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.467 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.467 NVMe0n1 00:07:38.467 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.467 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.467 Running I/O for 10 seconds... 
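bdevperf is launched here with `-q 1024 -o 4096 -w verify -t 10`, i.e. queue depth 1024 and a 4 KiB I/O size, so the MiB/s figures it reports follow directly from IOPS. A small check of that arithmetic, using the final `iops` value from this run's results JSON (the `io_size` and `iops` numbers below are taken from the log, the conversion itself is just IOPS × io_size / 2^20):

```python
# With 4 KiB I/Os, MiB/s = IOPS * 4096 / 2**20 = IOPS / 256.
io_size = 4096
iops = 12206.620674538613  # "iops" reported in the bdevperf results JSON

mibps = iops * io_size / 2**20
print(round(mibps, 2))  # 47.68, matching the reported "mibps"
```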
00:07:40.781 11278.00 IOPS, 44.05 MiB/s [2024-11-20T09:24:42.448Z] 11773.50 IOPS, 45.99 MiB/s [2024-11-20T09:24:43.385Z] 11940.67 IOPS, 46.64 MiB/s [2024-11-20T09:24:44.322Z] 12020.25 IOPS, 46.95 MiB/s [2024-11-20T09:24:45.259Z] 12071.40 IOPS, 47.15 MiB/s [2024-11-20T09:24:46.333Z] 12103.50 IOPS, 47.28 MiB/s [2024-11-20T09:24:47.269Z] 12132.14 IOPS, 47.39 MiB/s [2024-11-20T09:24:48.204Z] 12147.62 IOPS, 47.45 MiB/s [2024-11-20T09:24:49.580Z] 12163.11 IOPS, 47.51 MiB/s [2024-11-20T09:24:49.580Z] 12171.60 IOPS, 47.55 MiB/s 00:07:48.849 Latency(us) 00:07:48.849 [2024-11-20T09:24:49.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.849 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:48.849 Verification LBA range: start 0x0 length 0x4000 00:07:48.849 NVMe0n1 : 10.06 12206.62 47.68 0.00 0.00 83626.68 16982.37 54480.36 00:07:48.849 [2024-11-20T09:24:49.580Z] =================================================================================================================== 00:07:48.849 [2024-11-20T09:24:49.580Z] Total : 12206.62 47.68 0.00 0.00 83626.68 16982.37 54480.36 00:07:48.849 { 00:07:48.849 "results": [ 00:07:48.849 { 00:07:48.849 "job": "NVMe0n1", 00:07:48.849 "core_mask": "0x1", 00:07:48.849 "workload": "verify", 00:07:48.849 "status": "finished", 00:07:48.849 "verify_range": { 00:07:48.849 "start": 0, 00:07:48.849 "length": 16384 00:07:48.849 }, 00:07:48.849 "queue_depth": 1024, 00:07:48.849 "io_size": 4096, 00:07:48.849 "runtime": 10.055199, 00:07:48.849 "iops": 12206.620674538613, 00:07:48.849 "mibps": 47.68211200991646, 00:07:48.849 "io_failed": 0, 00:07:48.849 "io_timeout": 0, 00:07:48.849 "avg_latency_us": 83626.67999586258, 00:07:48.849 "min_latency_us": 16982.372173913045, 00:07:48.849 "max_latency_us": 54480.361739130436 00:07:48.849 } 00:07:48.849 ], 00:07:48.849 "core_count": 1 00:07:48.849 } 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 3350466 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3350466 ']' 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3350466 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350466 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350466' 00:07:48.849 killing process with pid 3350466 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3350466 00:07:48.849 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.849 00:07:48.849 Latency(us) 00:07:48.849 [2024-11-20T09:24:49.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.849 [2024-11-20T09:24:49.580Z] =================================================================================================================== 00:07:48.849 [2024-11-20T09:24:49.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3350466 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:48.849 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.850 rmmod nvme_tcp 00:07:48.850 rmmod nvme_fabrics 00:07:48.850 rmmod nvme_keyring 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3350440 ']' 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3350440 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3350440 ']' 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3350440 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350440 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350440' 00:07:48.850 killing process with pid 3350440 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3350440 00:07:48.850 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3350440 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.109 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.645 10:24:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.645 00:07:51.645 real 0m19.705s 00:07:51.645 user 0m22.945s 00:07:51.645 sys 0m6.183s 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.645 ************************************ 00:07:51.645 END TEST nvmf_queue_depth 00:07:51.645 ************************************ 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.645 ************************************ 00:07:51.645 START TEST nvmf_target_multipath 00:07:51.645 ************************************ 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.645 * Looking for test storage... 
00:07:51.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.645 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:51.645 10:24:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:51.645 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.645 --rc genhtml_branch_coverage=1 00:07:51.645 --rc genhtml_function_coverage=1 00:07:51.645 --rc genhtml_legend=1 00:07:51.646 --rc geninfo_all_blocks=1 00:07:51.646 --rc geninfo_unexecuted_blocks=1 00:07:51.646 00:07:51.646 ' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.646 --rc genhtml_branch_coverage=1 00:07:51.646 --rc genhtml_function_coverage=1 00:07:51.646 --rc genhtml_legend=1 00:07:51.646 --rc geninfo_all_blocks=1 00:07:51.646 --rc geninfo_unexecuted_blocks=1 00:07:51.646 00:07:51.646 ' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.646 --rc genhtml_branch_coverage=1 00:07:51.646 --rc genhtml_function_coverage=1 00:07:51.646 --rc genhtml_legend=1 00:07:51.646 --rc geninfo_all_blocks=1 00:07:51.646 --rc geninfo_unexecuted_blocks=1 00:07:51.646 00:07:51.646 ' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.646 --rc genhtml_branch_coverage=1 00:07:51.646 --rc genhtml_function_coverage=1 00:07:51.646 --rc genhtml_legend=1 00:07:51.646 --rc geninfo_all_blocks=1 00:07:51.646 --rc geninfo_unexecuted_blocks=1 00:07:51.646 00:07:51.646 ' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.646 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:58.219 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:58.219 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:58.219 Found net devices under 0000:86:00.0: cvl_0_0 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.219 10:24:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:58.219 Found net devices under 0000:86:00.1: cvl_0_1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.219 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.219 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:58.219 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.219 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.219 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:07:58.219 00:07:58.219 --- 10.0.0.2 ping statistics --- 00:07:58.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.219 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:58.220 00:07:58.220 --- 10.0.0.1 ping statistics --- 00:07:58.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.220 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:58.220 only one NIC for nvmf test 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:58.220 10:24:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.220 rmmod nvme_tcp 00:07:58.220 rmmod nvme_fabrics 00:07:58.220 rmmod nvme_keyring 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.220 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.597 00:07:59.597 real 0m8.410s 00:07:59.597 user 0m1.884s 00:07:59.597 sys 0m4.547s 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.597 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:59.597 ************************************ 00:07:59.597 END TEST nvmf_target_multipath 00:07:59.597 ************************************ 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core 
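The `iptr` cleanup traced above relies on a tagging scheme: every rule the harness inserts via `ipts` carries an `-m comment --comment 'SPDK_NVMF:...'` marker, so teardown can run `iptables-save | grep -v SPDK_NVMF | iptables-restore` and remove exactly its own rules. A sketch of the two halves as pure text filters, so it runs without root (the helper names here are my own; the real `iptr` pipes straight into `iptables-restore`):

```shell
# Remove only harness-inserted rules from an iptables-save dump.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

# Mirror of the ipts wrapper: append the tag that strip_spdk_rules keys on.
# Real invocation would be: iptables "$@" -m comment --comment "SPDK_NVMF:$*"
ipts_args_to_rule() {
    printf '%s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}
```

The design choice is that cleanup never has to remember which rules were added; the tag in the dump is the source of truth, and unrelated firewall rules pass through untouched.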
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.856 ************************************ 00:07:59.856 START TEST nvmf_zcopy 00:07:59.856 ************************************ 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:59.856 * Looking for test storage... 00:07:59.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:59.856 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.857 10:25:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.857 --rc genhtml_branch_coverage=1 00:07:59.857 --rc genhtml_function_coverage=1 00:07:59.857 --rc genhtml_legend=1 00:07:59.857 --rc geninfo_all_blocks=1 00:07:59.857 --rc geninfo_unexecuted_blocks=1 00:07:59.857 00:07:59.857 ' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.857 --rc genhtml_branch_coverage=1 00:07:59.857 --rc genhtml_function_coverage=1 00:07:59.857 --rc genhtml_legend=1 00:07:59.857 --rc geninfo_all_blocks=1 00:07:59.857 --rc geninfo_unexecuted_blocks=1 00:07:59.857 00:07:59.857 ' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.857 --rc genhtml_branch_coverage=1 00:07:59.857 --rc genhtml_function_coverage=1 00:07:59.857 --rc genhtml_legend=1 00:07:59.857 --rc geninfo_all_blocks=1 00:07:59.857 --rc geninfo_unexecuted_blocks=1 00:07:59.857 00:07:59.857 ' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.857 --rc genhtml_branch_coverage=1 00:07:59.857 --rc 
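The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits both version strings on `.` and `-` via `IFS=.-` and `read -ra`, then compares field by field, padding missing fields with the `decimal` helper. A condensed sketch of that logic under an assumed name (`version_lt`; the real script exposes `lt`/`gt`/`ge` wrappers around `cmp_versions`):

```shell
# Field-wise numeric version comparison, as traced from scripts/common.sh.
# Returns 0 when $1 < $2, 1 otherwise (including equality).
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1
}
```

Comparing numerically per field is what makes `1.15 < 2` true here while a plain string comparison would get `1.2` vs `1.10` wrong.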
genhtml_function_coverage=1 00:07:59.857 --rc genhtml_legend=1 00:07:59.857 --rc geninfo_all_blocks=1 00:07:59.857 --rc geninfo_unexecuted_blocks=1 00:07:59.857 00:07:59.857 ' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.857 10:25:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.857 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.117 10:25:00 
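The `[: : integer expression expected` diagnostic recorded above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33: `-eq` on an empty variable is a test error, though here it is benign because the failed test takes the false branch anyway. The usual hardening is to default the variable before the numeric test; a sketch with an illustrative variable name (not the actual one in nvmf/common.sh):

```shell
# Defaulting an empty/unset value avoids the "[: : integer expression
# expected" noise while preserving the false-branch behavior seen above.
no_huge=""                          # may be empty when the feature is unset
if [ "${no_huge:-0}" -eq 1 ]; then  # empty expands to 0: valid integer test
    echo "feature enabled"
else
    echo "feature disabled"
fi
```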
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.117 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.690 10:25:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.690 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.690 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.690 Found net devices under 0000:86:00.0: cvl_0_0 00:08:06.690 10:25:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.690 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.690 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.691 10:25:06 
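The device-discovery loop above finds each NIC's interface name by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the directory prefix with `${pci_net_devs[@]##*/}`, yielding `cvl_0_0` and `cvl_0_1`. A self-contained sketch of that lookup; `sysfs_root` is a stand-in parameter so it runs without real hardware (the traced code globs `/sys/bus/pci/devices` directly):

```shell
# Print the net-interface basenames registered under one PCI device,
# mirroring the pci_net_devs glob-and-strip seen in nvmf/common.sh.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2
    local -a pci_net_devs=("$sysfs_root/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || return 1    # no netdev bound to this PCI fn
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only interface names
    printf '%s\n' "${pci_net_devs[@]}"
}
```

The existence check matters because an unmatched glob stays literal in bash, which is why the trace can distinguish "Found net devices under ..." from devices with no bound driver.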
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:08:06.691 00:08:06.691 --- 10.0.0.2 ping statistics --- 00:08:06.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.691 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:06.691 00:08:06.691 --- 10.0.0.1 ping statistics --- 00:08:06.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.691 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3359384 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3359384 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3359384 ']' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 [2024-11-20 10:25:06.621456] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:08:06.691 [2024-11-20 10:25:06.621502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.691 [2024-11-20 10:25:06.699994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.691 [2024-11-20 10:25:06.739396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.691 [2024-11-20 10:25:06.739433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:06.691 [2024-11-20 10:25:06.739441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.691 [2024-11-20 10:25:06.739448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.691 [2024-11-20 10:25:06.739452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.691 [2024-11-20 10:25:06.740023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 [2024-11-20 10:25:06.887238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 [2024-11-20 10:25:06.907456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.691 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.692 malloc0 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.692 { 00:08:06.692 "params": { 00:08:06.692 "name": "Nvme$subsystem", 00:08:06.692 "trtype": "$TEST_TRANSPORT", 00:08:06.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.692 "adrfam": "ipv4", 00:08:06.692 "trsvcid": "$NVMF_PORT", 00:08:06.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.692 "hdgst": ${hdgst:-false}, 00:08:06.692 "ddgst": ${ddgst:-false} 00:08:06.692 }, 00:08:06.692 "method": "bdev_nvme_attach_controller" 00:08:06.692 } 00:08:06.692 EOF 00:08:06.692 )") 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:06.692 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.692 "params": { 00:08:06.692 "name": "Nvme1", 00:08:06.692 "trtype": "tcp", 00:08:06.692 "traddr": "10.0.0.2", 00:08:06.692 "adrfam": "ipv4", 00:08:06.692 "trsvcid": "4420", 00:08:06.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.692 "hdgst": false, 00:08:06.692 "ddgst": false 00:08:06.692 }, 00:08:06.692 "method": "bdev_nvme_attach_controller" 00:08:06.692 }' 00:08:06.692 [2024-11-20 10:25:06.987619] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:08:06.692 [2024-11-20 10:25:06.987665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3359604 ] 00:08:06.692 [2024-11-20 10:25:07.060509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.692 [2024-11-20 10:25:07.102207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.692 Running I/O for 10 seconds... 
00:08:09.009 8478.00 IOPS, 66.23 MiB/s [2024-11-20T09:25:10.678Z] 8535.00 IOPS, 66.68 MiB/s [2024-11-20T09:25:11.616Z] 8525.00 IOPS, 66.60 MiB/s [2024-11-20T09:25:12.552Z] 8509.75 IOPS, 66.48 MiB/s [2024-11-20T09:25:13.490Z] 8521.00 IOPS, 66.57 MiB/s [2024-11-20T09:25:14.428Z] 8536.17 IOPS, 66.69 MiB/s [2024-11-20T09:25:15.365Z] 8538.00 IOPS, 66.70 MiB/s [2024-11-20T09:25:16.741Z] 8547.12 IOPS, 66.77 MiB/s [2024-11-20T09:25:17.678Z] 8552.67 IOPS, 66.82 MiB/s [2024-11-20T09:25:17.678Z] 8559.10 IOPS, 66.87 MiB/s 00:08:16.947 Latency(us) 00:08:16.947 [2024-11-20T09:25:17.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.947 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:16.947 Verification LBA range: start 0x0 length 0x1000 00:08:16.947 Nvme1n1 : 10.01 8559.48 66.87 0.00 0.00 14910.68 1146.88 23137.06 00:08:16.947 [2024-11-20T09:25:17.678Z] =================================================================================================================== 00:08:16.947 [2024-11-20T09:25:17.678Z] Total : 8559.48 66.87 0.00 0.00 14910.68 1146.88 23137.06 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3361223 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.947 10:25:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.947 { 00:08:16.947 "params": { 00:08:16.947 "name": "Nvme$subsystem", 00:08:16.947 "trtype": "$TEST_TRANSPORT", 00:08:16.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.947 "adrfam": "ipv4", 00:08:16.947 "trsvcid": "$NVMF_PORT", 00:08:16.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.947 "hdgst": ${hdgst:-false}, 00:08:16.947 "ddgst": ${ddgst:-false} 00:08:16.947 }, 00:08:16.947 "method": "bdev_nvme_attach_controller" 00:08:16.947 } 00:08:16.947 EOF 00:08:16.947 )") 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:16.947 [2024-11-20 10:25:17.503635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.947 [2024-11-20 10:25:17.503666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:16.947 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:16.947 [2024-11-20 10:25:17.511621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.511635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.948 "params": { 00:08:16.948 "name": "Nvme1", 00:08:16.948 "trtype": "tcp", 00:08:16.948 "traddr": "10.0.0.2", 00:08:16.948 "adrfam": "ipv4", 00:08:16.948 "trsvcid": "4420", 00:08:16.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.948 "hdgst": false, 00:08:16.948 "ddgst": false 00:08:16.948 }, 00:08:16.948 "method": "bdev_nvme_attach_controller" 00:08:16.948 }' 00:08:16.948 [2024-11-20 10:25:17.519636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.519646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.527658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.527668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.535680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.535690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.544688] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:08:16.948 [2024-11-20 10:25:17.544730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361223 ] 00:08:16.948 [2024-11-20 10:25:17.547714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.547725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.559747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.559757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.571778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.571788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.579800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.579810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.587819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.587829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.595842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.595852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.603864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.603874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:16.948 [2024-11-20 10:25:17.611883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.611893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.619904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.619914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.621577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.948 [2024-11-20 10:25:17.631941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.631963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.643978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.643992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.656015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.656031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.948 [2024-11-20 10:25:17.664176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.948 [2024-11-20 10:25:17.668043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.948 [2024-11-20 10:25:17.668059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.680085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.680103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.692112] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.692129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.704140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.704156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.716170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.716183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.728205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.728220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.740235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.740246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.752283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.752302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.764312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.764328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.776401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.776416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.788418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.788429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.800448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.800460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.812480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.812490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.824516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.824530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.836550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.836565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.848575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.848586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.860608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.860618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.872649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.872663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.884676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 
[2024-11-20 10:25:17.884687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.896708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.896724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.908742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.908752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.920778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.920791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.207 [2024-11-20 10:25:17.932812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.207 [2024-11-20 10:25:17.932823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:17.944844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:17.944854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:17.956880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:17.956892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:17.968913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:17.968922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:17.980997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:17.981014] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 Running I/O for 5 seconds... 00:08:17.466 [2024-11-20 10:25:17.995835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:17.995854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.010663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.010683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.019970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.019990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.029589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.029608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.039250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.039268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.054389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.054408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.069272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.069291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.078465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.078484] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.093644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.093662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.104680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.104698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.113650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.113669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.128344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.128367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.139730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.139747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.149410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.149427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.159032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.159050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.168449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.168467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.466 [2024-11-20 10:25:18.183488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.183508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.466 [2024-11-20 10:25:18.194527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.466 [2024-11-20 10:25:18.194546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.209099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.209118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.218246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.218265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.227797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.227816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.242724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.242742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.253374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.253392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.268416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.268434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.279488] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.279506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.288897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.288915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.303956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.303975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.314960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.314978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.329362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.329382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.338537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.338556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.348161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.348179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.362821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.362839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.371990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.372009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.381470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.381489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.390881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.390900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.400187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.400216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.414996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.415015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.428702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.428722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.437798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.437816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.725 [2024-11-20 10:25:18.446650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.725 [2024-11-20 10:25:18.446668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.461253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 
[2024-11-20 10:25:18.461271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.475578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.475597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.486899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.486919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.501240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.501259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.510267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.510285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.519058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.519076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.533796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.533815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.547252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.547271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.561211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.561235] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.570206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.570224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.579531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.579549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.594193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.594222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.603628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.603645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.612806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.612824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.622160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.622177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.631456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.631474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.646198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.646217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.984 [2024-11-20 10:25:18.655347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.655366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.665178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.665196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.674796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.674815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.689718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.689737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.984 [2024-11-20 10:25:18.705463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.984 [2024-11-20 10:25:18.705481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.719871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.719889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.728831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.728849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.743419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.743437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.757590] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.757609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.768364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.768382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.777838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.777856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.787323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.787342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.796782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.796799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.811695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.811713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.822579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.822596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.837738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.837756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.853377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.853396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.862405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.862423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.871628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.871648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.885967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.886002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.900215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.900233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.915479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.915498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.925045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.925063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.934573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.934590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.949329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 
[2024-11-20 10:25:18.949347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.244 [2024-11-20 10:25:18.958660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.244 [2024-11-20 10:25:18.958678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:18.973437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:18.973459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:18.982600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:18.982620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 16281.00 IOPS, 127.20 MiB/s [2024-11-20T09:25:19.234Z] [2024-11-20 10:25:18.997325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:18.997346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.008674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.008698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.023376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.023395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.031031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.031049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.044430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 
[2024-11-20 10:25:19.044451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.053358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.053376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.062864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.062883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.072148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.072167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.081547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.081565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.090934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.090958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.100340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.100359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.115349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.115368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.131419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.131438] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.146034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.146052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.156897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.156916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.171401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.171419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.185191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.185210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.194195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.194214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.203684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.203703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.212898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.212918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.503 [2024-11-20 10:25:19.222311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.503 [2024-11-20 10:25:19.222334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:18.763 [2024-11-20 10:25:19.237249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.237268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.251350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.251369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.260264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.260284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.269521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.269540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.278767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.278786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.293237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.293256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.302229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.302247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.316599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.316619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.324349] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.324368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.333478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.333497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.348156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.348177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.361891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.361909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.371085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.371102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.380397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.380416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.389793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.389811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.404872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.404890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.415638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.415655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.425294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.425312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.434727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.434749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.443555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.443573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.453183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.453202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.462568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.462585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.471418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.471436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.481356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 [2024-11-20 10:25:19.481373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.763 [2024-11-20 10:25:19.490905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.763 
[2024-11-20 10:25:19.490923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.505657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.505674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.519792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.519811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.528961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.528979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.537754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.537772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.546450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.546468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.561153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.561171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.574827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.574846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.584101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.584120] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.593610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.593628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.608460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.608478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.619021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.619040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.628120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.628138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.636988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.637012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.646523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.646542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.656039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.656057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.671167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.671185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:19.022 [2024-11-20 10:25:19.682210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.682229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.691245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.691262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.700874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.700892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.710288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.710306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.724963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.724981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.739247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.739265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.022 [2024-11-20 10:25:19.750621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.022 [2024-11-20 10:25:19.750639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.281 [2024-11-20 10:25:19.760251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.282 [2024-11-20 10:25:19.760269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.282 [2024-11-20 10:25:19.770265] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.282 [2024-11-20 10:25:19.770283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.282
[... the preceding two *ERROR* lines repeat for every retry from 10:25:19.784 through 10:25:22.017; only the timestamps differ ...]
16382.50 IOPS, 127.99 MiB/s [2024-11-20T09:25:20.013Z]
16363.00 IOPS, 127.84 MiB/s [2024-11-20T09:25:21.050Z]
16386.25 IOPS, 128.02 MiB/s [2024-11-20T09:25:22.088Z]
[2024-11-20 10:25:22.003530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.003549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.357 [2024-11-20 10:25:22.017702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.017720]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.357 [2024-11-20 10:25:22.031745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.031764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.357 [2024-11-20 10:25:22.045677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.045696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.357 [2024-11-20 10:25:22.060189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.060208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.357 [2024-11-20 10:25:22.074742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.357 [2024-11-20 10:25:22.074762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.090244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.090270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.104937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.104964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.116001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.116025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.130124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.130142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.616 [2024-11-20 10:25:22.144043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.144062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.158291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.158309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.172429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.172448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.186926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.186943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.202126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.202144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.216367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.216385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.227514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.616 [2024-11-20 10:25:22.227532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.616 [2024-11-20 10:25:22.242687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.242721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.253657] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.253675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.267997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.268015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.281623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.281641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.295903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.295920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.309958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.309976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.324296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.324315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.617 [2024-11-20 10:25:22.335382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.617 [2024-11-20 10:25:22.335400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.350288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.350313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.361339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.361357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.375203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.375222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.389839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.389857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.401137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.401155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.416118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.416137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.426767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.426785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.441054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.441072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.455188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.455217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.466032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 
[2024-11-20 10:25:22.466050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.480565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.480584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.494772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.494790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.508734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.508753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.522775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.522793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.537425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.537443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.552742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.552761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.567001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.567020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.578016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.578034] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.875 [2024-11-20 10:25:22.592526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.875 [2024-11-20 10:25:22.592545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.606358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.606380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.620433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.620452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.634564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.634582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.648460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.648479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.662312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.662330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.676871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.676889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.688189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.688218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:22.134 [2024-11-20 10:25:22.702488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.702506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.716566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.716584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.731753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.731771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.746009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.746027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.760439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.760458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.774850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.774868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.786058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.786075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.800735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.800753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.814788] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.814806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.828476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.828494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.842381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.842399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.134 [2024-11-20 10:25:22.856209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.134 [2024-11-20 10:25:22.856228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.870317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.870335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.884843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.884862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.896238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.896256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.910318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.910337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.924702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.924721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.938556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.938575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.952766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.952785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.962369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.962387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.976905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.976926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:22.990851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:22.990871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 16415.80 IOPS, 128.25 MiB/s [2024-11-20T09:25:23.124Z] [2024-11-20 10:25:23.003742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.003761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 00:08:22.393 Latency(us) 00:08:22.393 [2024-11-20T09:25:23.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.393 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:22.393 Nvme1n1 : 5.01 16418.28 128.27 0.00 0.00 7788.49 3433.52 16298.52 
00:08:22.393 [2024-11-20T09:25:23.124Z] =================================================================================================================== 00:08:22.393 [2024-11-20T09:25:23.124Z] Total : 16418.28 128.27 0.00 0.00 7788.49 3433.52 16298.52 00:08:22.393 [2024-11-20 10:25:23.013134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.013150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.025168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.025182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.037205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.037225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.049234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.049251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.061266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.061280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.073295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.073310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.085327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.085341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.097358] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.097372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.109392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.393 [2024-11-20 10:25:23.109406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.393 [2024-11-20 10:25:23.121423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.394 [2024-11-20 10:25:23.121433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.653 [2024-11-20 10:25:23.133460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.653 [2024-11-20 10:25:23.133474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.653 [2024-11-20 10:25:23.145488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.653 [2024-11-20 10:25:23.145501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.653 [2024-11-20 10:25:23.157520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.653 [2024-11-20 10:25:23.157531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3361223) - No such process 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3361223 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 
-- # set +x 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.653 delay0 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.653 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:22.653 [2024-11-20 10:25:23.349120] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:29.211 [2024-11-20 10:25:29.608215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba3ad0 is same with the state(6) to be set 00:08:29.211 [2024-11-20 10:25:29.608261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba3ad0 is same with the state(6) to be set 00:08:29.211 Initializing NVMe Controllers 00:08:29.211 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:29.211 Initialization complete. Launching workers. 00:08:29.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 761 00:08:29.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1034, failed to submit 47 00:08:29.211 success 845, unsuccessful 189, failed 0 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.211 rmmod nvme_tcp 00:08:29.211 rmmod nvme_fabrics 00:08:29.211 rmmod nvme_keyring 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3359384 ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 
3359384 ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3359384' 00:08:29.211 killing process with pid 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3359384 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.211 10:25:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.211 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.749 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.749 00:08:31.749 real 0m31.588s 00:08:31.749 user 0m42.528s 00:08:31.749 sys 0m10.962s 00:08:31.749 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.749 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.749 ************************************ 00:08:31.749 END TEST nvmf_zcopy 00:08:31.749 ************************************ 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.749 ************************************ 00:08:31.749 START TEST nvmf_nmic 00:08:31.749 ************************************ 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:31.749 * Looking for test storage... 
00:08:31.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.749 10:25:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.749 --rc genhtml_branch_coverage=1 00:08:31.749 --rc genhtml_function_coverage=1 00:08:31.749 --rc genhtml_legend=1 00:08:31.749 --rc geninfo_all_blocks=1 00:08:31.749 --rc geninfo_unexecuted_blocks=1 
00:08:31.749 00:08:31.749 ' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.749 --rc genhtml_branch_coverage=1 00:08:31.749 --rc genhtml_function_coverage=1 00:08:31.749 --rc genhtml_legend=1 00:08:31.749 --rc geninfo_all_blocks=1 00:08:31.749 --rc geninfo_unexecuted_blocks=1 00:08:31.749 00:08:31.749 ' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.749 --rc genhtml_branch_coverage=1 00:08:31.749 --rc genhtml_function_coverage=1 00:08:31.749 --rc genhtml_legend=1 00:08:31.749 --rc geninfo_all_blocks=1 00:08:31.749 --rc geninfo_unexecuted_blocks=1 00:08:31.749 00:08:31.749 ' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.749 --rc genhtml_branch_coverage=1 00:08:31.749 --rc genhtml_function_coverage=1 00:08:31.749 --rc genhtml_legend=1 00:08:31.749 --rc geninfo_all_blocks=1 00:08:31.749 --rc geninfo_unexecuted_blocks=1 00:08:31.749 00:08:31.749 ' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.749 10:25:32 
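The coverage setup traced above (`lt 1.15 2` via `cmp_versions`) compares the installed lcov version against 2 field by field before enabling the branch/function coverage options. A standalone equivalent of that comparison, using `sort -V` instead of the script's manual field loop (function name is illustrative, not from the test scripts):

```shell
# version_lt A B: succeed (exit 0) when version A sorts strictly before B.
# This mirrors what scripts/common.sh's cmp_versions decides for "1.15 < 2",
# but delegates the field-by-field ordering to GNU sort's -V (version sort).
version_lt() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"
```

With lcov 1.15 installed, as in this run, the check succeeds and the older `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling is selected.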
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.749 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:31.750 
10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.750 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.322 10:25:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.322 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.322 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.322 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.323 
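The discovery loop above resolves each supported PCI function (here the two Intel 0x159b ports at 0000:86:00.0/1) to its kernel net device by globbing sysfs. A minimal standalone version of that lookup (the PCI addresses are taken from this log; on another machine, substitute your own):

```shell
# pci_to_netdevs PCI...: for each PCI address, print the net interfaces
# bound to it, in the same "Found net devices under ..." format as the log.
# Prints nothing (and still exits 0) for addresses absent from this host.
pci_to_netdevs() {
  local pci d
  for pci in "$@"; do
    for d in "/sys/bus/pci/devices/$pci/net/"*; do
      # each entry under .../net/ is an interface name for that function
      if [ -e "$d" ]; then
        echo "Found net devices under $pci: ${d##*/}"
      fi
    done
  done
}

pci_to_netdevs 0000:86:00.0 0000:86:00.1
```

On the CI host above this yields the two `cvl_0_*` interface names; on a machine without those devices it prints nothing.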
10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.323 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:08:38.323 00:08:38.323 --- 10.0.0.2 ping statistics --- 00:08:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.323 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:38.323 00:08:38.323 --- 10.0.0.1 ping statistics --- 00:08:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.323 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3366819 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
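The `nvmf_tcp_init` sequence just traced isolates the target port in a network namespace and verifies reachability in both directions with a single ping each way. The interface and namespace names below come from this log; running the real commands needs root, so this sketch only prints the sequence it would execute:

```shell
# Per-test network setup as seen in nvmf/common.sh's nvmf_tcp_init: move the
# target-side port into its own namespace, address both ends of the pair,
# bring the links up, then ping the target address from the initiator side.
netns_setup_cmds() {
  local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
  echo "ip netns add $ns"
  echo "ip link set $tgt netns $ns"
  echo "ip addr add 10.0.0.1/24 dev $ini"
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
  echo "ip link set $ini up"
  echo "ip netns exec $ns ip link set $tgt up"
  echo "ip netns exec $ns ip link set lo up"
  echo "ping -c 1 10.0.0.2"
}

netns_setup_cmds
```

Because the target app is later launched with `ip netns exec cvl_0_0_ns_spdk ...`, its listener on 10.0.0.2:4420 is only reachable through this namespace plumbing (plus the SPDK-tagged iptables ACCEPT rule added just before the pings).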
00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3366819 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3366819 ']' 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.323 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.323 [2024-11-20 10:25:38.298622] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:08:38.323 [2024-11-20 10:25:38.298667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.323 [2024-11-20 10:25:38.377674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.323 [2024-11-20 10:25:38.421179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.323 [2024-11-20 10:25:38.421217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.323 [2024-11-20 10:25:38.421224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.323 [2024-11-20 10:25:38.421230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.323 [2024-11-20 10:25:38.421235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.323 [2024-11-20 10:25:38.422792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.323 [2024-11-20 10:25:38.422900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.323 [2024-11-20 10:25:38.422988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.323 [2024-11-20 10:25:38.422987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 [2024-11-20 10:25:39.174477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.584 
10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 Malloc0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 [2024-11-20 10:25:39.239497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:38.584 test case1: single bdev can't be used in multiple subsystems 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 [2024-11-20 10:25:39.263348] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:38.584 [2024-11-20 
10:25:39.263368] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:38.584 [2024-11-20 10:25:39.263376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.584 request: 00:08:38.584 { 00:08:38.584 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.584 "namespace": { 00:08:38.584 "bdev_name": "Malloc0", 00:08:38.584 "no_auto_visible": false 00:08:38.584 }, 00:08:38.584 "method": "nvmf_subsystem_add_ns", 00:08:38.584 "req_id": 1 00:08:38.584 } 00:08:38.584 Got JSON-RPC error response 00:08:38.584 response: 00:08:38.584 { 00:08:38.584 "code": -32602, 00:08:38.584 "message": "Invalid parameters" 00:08:38.584 } 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:38.584 Adding namespace failed - expected result. 
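The failing add_ns exchange above boils down to a short RPC sequence. As a dry-run sketch (rpc.py on PATH and a running SPDK target are assumptions; commands are echoed rather than executed), the flow traced by nmic.sh test case 1 is:

```shell
# Dry-run sketch of the test-case-1 flow from the log above.
# `run` only prints each command; replace its body with "$@" to execute
# against a live SPDK target (rpc.py is an assumed helper script).
run() { echo "+ $*"; }

run rpc.py bdev_malloc_create 64 512 -b Malloc0
run rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
run rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# This last call is the one that fails with -32602 in the log: Malloc0 is
# already claimed (type exclusive_write) by cnode1, so cnode2 cannot open it.
run rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
```

The test script treats that failure as the expected result, which is why the log prints "Adding namespace failed - expected result."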
00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:38.584 test case2: host connect to nvmf target in multiple paths 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 [2024-11-20 10:25:39.275485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.584 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.988 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:41.366 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.366 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:41.366 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.366 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:41.366 10:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
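The waitforserial trace that follows (autotest_common.sh @1202-@1212) is a bounded poll: loop up to 16 times, two seconds apart, until lsblk reports a block device whose serial matches the subsystem's. A minimal standalone sketch, assuming lsblk is available (the function name and loop shape mirror the trace; this is not the exact helper from the tree):

```shell
# Poll lsblk until `want` block devices with the given serial appear,
# mirroring the waitforserial loop traced in the log (up to 16 attempts,
# two seconds apart). Returns 0 on success, 1 on timeout.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 have=0
    while (( i++ <= 15 )); do
        # grep -c prints 0 when nothing matches, so `have` is always numeric,
        # even if lsblk is unavailable (its stderr is discarded).
        have=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        (( have == want )) && return 0
        sleep 2
    done
    return 1
}
```

In the run above the first poll already sees the SPDKISFASTANDAWESOME serial (nvme_devices=1), so the helper returns 0 on its first pass.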
00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:43.270 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:43.270 [global] 00:08:43.270 thread=1 00:08:43.270 invalidate=1 00:08:43.270 rw=write 00:08:43.270 time_based=1 00:08:43.270 runtime=1 00:08:43.270 ioengine=libaio 00:08:43.270 direct=1 00:08:43.270 bs=4096 00:08:43.270 iodepth=1 00:08:43.270 norandommap=0 00:08:43.270 numjobs=1 00:08:43.270 00:08:43.270 verify_dump=1 00:08:43.270 verify_backlog=512 00:08:43.270 verify_state_save=0 00:08:43.270 do_verify=1 00:08:43.270 verify=crc32c-intel 00:08:43.270 [job0] 00:08:43.270 filename=/dev/nvme0n1 00:08:43.270 Could not set queue depth (nvme0n1) 00:08:43.270 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.270 fio-3.35 00:08:43.270 Starting 1 thread 00:08:44.646 00:08:44.646 job0: (groupid=0, jobs=1): err= 0: pid=3367906: Wed Nov 20 10:25:45 2024 00:08:44.646 read: IOPS=2447, BW=9790KiB/s (10.0MB/s)(9800KiB/1001msec) 00:08:44.646 slat (nsec): min=6332, max=30393, avg=7425.55, stdev=963.29 00:08:44.646 clat (usec): min=174, max=460, avg=232.70, stdev=20.92 00:08:44.646 lat (usec): min=182, max=467, avg=240.12, 
stdev=20.95 00:08:44.646 clat percentiles (usec): 00:08:44.646 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:08:44.646 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:08:44.646 | 70.00th=[ 241], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:08:44.646 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 383], 00:08:44.646 | 99.99th=[ 461] 00:08:44.646 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:44.646 slat (usec): min=9, max=24597, avg=20.24, stdev=485.94 00:08:44.646 clat (usec): min=107, max=328, avg=136.55, stdev=22.27 00:08:44.646 lat (usec): min=118, max=24884, avg=156.79, stdev=489.43 00:08:44.646 clat percentiles (usec): 00:08:44.646 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 124], 00:08:44.646 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 130], 00:08:44.646 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 174], 95.00th=[ 182], 00:08:44.646 | 99.00th=[ 208], 99.50th=[ 255], 99.90th=[ 285], 99.95th=[ 289], 00:08:44.646 | 99.99th=[ 330] 00:08:44.646 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:08:44.646 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:44.646 lat (usec) : 250=86.95%, 500=13.05% 00:08:44.646 cpu : usr=2.20%, sys=4.80%, ctx=5013, majf=0, minf=1 00:08:44.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:44.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.646 issued rwts: total=2450,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:44.646 00:08:44.646 Run status group 0 (all jobs): 00:08:44.646 READ: bw=9790KiB/s (10.0MB/s), 9790KiB/s-9790KiB/s (10.0MB/s-10.0MB/s), io=9800KiB (10.0MB), run=1001-1001msec 00:08:44.646 WRITE: bw=9.99MiB/s (10.5MB/s), 
9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:08:44.646 00:08:44.646 Disk stats (read/write): 00:08:44.646 nvme0n1: ios=2074/2497, merge=0/0, ticks=1458/336, in_queue=1794, util=98.60% 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:44.646 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.647 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:44.647 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.647 10:25:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.647 rmmod nvme_tcp 00:08:44.647 rmmod nvme_fabrics 00:08:44.905 rmmod nvme_keyring 00:08:44.905 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3366819 ']' 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3366819 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3366819 ']' 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3366819 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3366819 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3366819' 00:08:44.906 killing process with pid 3366819 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3366819 00:08:44.906 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3366819 00:08:45.164 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:08:45.164 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.164 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.164 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.165 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.071 00:08:47.071 real 0m15.669s 00:08:47.071 user 0m35.871s 00:08:47.071 sys 0m5.368s 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.071 ************************************ 00:08:47.071 END TEST nvmf_nmic 00:08:47.071 ************************************ 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:47.071 10:25:47 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.071 ************************************ 00:08:47.071 START TEST nvmf_fio_target 00:08:47.071 ************************************ 00:08:47.071 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:47.331 * Looking for test storage... 00:08:47.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.331 
10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.331 --rc genhtml_branch_coverage=1 00:08:47.331 --rc genhtml_function_coverage=1 00:08:47.331 --rc genhtml_legend=1 00:08:47.331 --rc geninfo_all_blocks=1 00:08:47.331 --rc geninfo_unexecuted_blocks=1 00:08:47.331 00:08:47.331 ' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.331 --rc genhtml_branch_coverage=1 00:08:47.331 --rc genhtml_function_coverage=1 00:08:47.331 --rc genhtml_legend=1 00:08:47.331 --rc geninfo_all_blocks=1 00:08:47.331 --rc geninfo_unexecuted_blocks=1 00:08:47.331 00:08:47.331 ' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.331 --rc genhtml_branch_coverage=1 00:08:47.331 --rc genhtml_function_coverage=1 00:08:47.331 --rc genhtml_legend=1 00:08:47.331 --rc geninfo_all_blocks=1 00:08:47.331 --rc geninfo_unexecuted_blocks=1 00:08:47.331 00:08:47.331 ' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.331 --rc genhtml_branch_coverage=1 00:08:47.331 --rc 
genhtml_function_coverage=1 00:08:47.331 --rc genhtml_legend=1 00:08:47.331 --rc geninfo_all_blocks=1 00:08:47.331 --rc geninfo_unexecuted_blocks=1 00:08:47.331 00:08:47.331 ' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.331 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.332 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.905 10:25:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.905 10:25:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.905 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.906 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.906 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:08:53.906 00:08:53.906 --- 10.0.0.2 ping statistics --- 00:08:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.906 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:53.906 00:08:53.906 --- 10.0.0.1 ping statistics --- 00:08:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.906 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3371701 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3371701 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3371701 ']' 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.906 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.906 [2024-11-20 10:25:54.046708] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:08:53.906 [2024-11-20 10:25:54.046758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.906 [2024-11-20 10:25:54.126309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.906 [2024-11-20 10:25:54.171075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.906 [2024-11-20 10:25:54.171109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.906 [2024-11-20 10:25:54.171116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.906 [2024-11-20 10:25:54.171123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.906 [2024-11-20 10:25:54.171128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.906 [2024-11-20 10:25:54.172661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.906 [2024-11-20 10:25:54.172768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.906 [2024-11-20 10:25:54.172875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.906 [2024-11-20 10:25:54.172876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.906 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.906 [2024-11-20 10:25:54.486777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.907 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.165 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:54.165 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.423 10:25:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:54.423 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.683 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:54.683 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.683 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:54.683 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:54.942 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.200 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:55.200 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.459 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:55.459 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.718 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:55.718 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:55.718 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.976 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:55.976 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.235 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.235 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.494 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.494 [2024-11-20 10:25:57.198418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.753 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:56.753 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:57.012 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:58.391 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:00.296 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:00.296 [global] 00:09:00.296 thread=1 00:09:00.296 invalidate=1 00:09:00.296 rw=write 00:09:00.296 time_based=1 00:09:00.296 runtime=1 00:09:00.296 ioengine=libaio 00:09:00.296 direct=1 00:09:00.296 bs=4096 00:09:00.296 iodepth=1 00:09:00.296 norandommap=0 00:09:00.296 numjobs=1 00:09:00.296 00:09:00.296 
verify_dump=1 00:09:00.296 verify_backlog=512 00:09:00.296 verify_state_save=0 00:09:00.296 do_verify=1 00:09:00.296 verify=crc32c-intel 00:09:00.296 [job0] 00:09:00.296 filename=/dev/nvme0n1 00:09:00.296 [job1] 00:09:00.296 filename=/dev/nvme0n2 00:09:00.296 [job2] 00:09:00.296 filename=/dev/nvme0n3 00:09:00.296 [job3] 00:09:00.296 filename=/dev/nvme0n4 00:09:00.296 Could not set queue depth (nvme0n1) 00:09:00.296 Could not set queue depth (nvme0n2) 00:09:00.296 Could not set queue depth (nvme0n3) 00:09:00.296 Could not set queue depth (nvme0n4) 00:09:00.554 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.554 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.554 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.555 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.555 fio-3.35 00:09:00.555 Starting 4 threads 00:09:01.939 00:09:01.939 job0: (groupid=0, jobs=1): err= 0: pid=3373176: Wed Nov 20 10:26:02 2024 00:09:01.939 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec) 00:09:01.939 slat (nsec): min=9753, max=22354, avg=13009.74, stdev=4381.23 00:09:01.939 clat (usec): min=215, max=42934, avg=39262.25, stdev=8522.67 00:09:01.939 lat (usec): min=227, max=42957, avg=39275.26, stdev=8522.81 00:09:01.939 clat percentiles (usec): 00:09:01.939 | 1.00th=[ 217], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:01.939 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.939 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.939 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:01.939 | 99.99th=[42730] 00:09:01.939 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:01.939 slat (nsec): min=10132, max=42869, 
avg=11848.45, stdev=2038.97 00:09:01.939 clat (usec): min=122, max=287, avg=208.40, stdev=39.33 00:09:01.939 lat (usec): min=133, max=300, avg=220.24, stdev=39.69 00:09:01.939 clat percentiles (usec): 00:09:01.939 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 169], 00:09:01.939 | 30.00th=[ 186], 40.00th=[ 202], 50.00th=[ 219], 60.00th=[ 229], 00:09:01.939 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 262], 00:09:01.939 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 289], 00:09:01.939 | 99.99th=[ 289] 00:09:01.939 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.939 lat (usec) : 250=83.36%, 500=12.52% 00:09:01.939 lat (msec) : 50=4.11% 00:09:01.939 cpu : usr=0.69%, sys=0.59%, ctx=535, majf=0, minf=1 00:09:01.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.939 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.939 job1: (groupid=0, jobs=1): err= 0: pid=3373197: Wed Nov 20 10:26:02 2024 00:09:01.939 read: IOPS=23, BW=92.8KiB/s (95.1kB/s)(96.0KiB/1034msec) 00:09:01.939 slat (nsec): min=9779, max=23456, avg=21565.88, stdev=3546.24 00:09:01.939 clat (usec): min=314, max=41052, avg=39238.76, stdev=8291.33 00:09:01.939 lat (usec): min=337, max=41074, avg=39260.32, stdev=8291.13 00:09:01.939 clat percentiles (usec): 00:09:01.939 | 1.00th=[ 314], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:01.939 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.939 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.939 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:01.939 | 99.99th=[41157] 00:09:01.939 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:01.939 slat (nsec): min=10483, max=43805, avg=11967.24, stdev=2212.11 00:09:01.939 clat (usec): min=139, max=256, avg=162.88, stdev=13.03 00:09:01.939 lat (usec): min=151, max=300, avg=174.85, stdev=13.76 00:09:01.939 clat percentiles (usec): 00:09:01.939 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:09:01.939 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:01.939 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:09:01.939 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 258], 99.95th=[ 258], 00:09:01.939 | 99.99th=[ 258] 00:09:01.939 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.939 lat (usec) : 250=95.34%, 500=0.37% 00:09:01.939 lat (msec) : 50=4.29% 00:09:01.939 cpu : usr=0.68%, sys=0.58%, ctx=537, majf=0, minf=1 00:09:01.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.940 job2: (groupid=0, jobs=1): err= 0: pid=3373221: Wed Nov 20 10:26:02 2024 00:09:01.940 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:09:01.940 slat (nsec): min=11274, max=25225, avg=23273.18, stdev=2740.51 00:09:01.940 clat (usec): min=40533, max=42008, avg=40989.17, stdev=248.03 00:09:01.940 lat (usec): min=40545, max=42032, avg=41012.44, stdev=249.21 00:09:01.940 clat percentiles (usec): 00:09:01.940 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:01.940 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.940 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.940 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.940 | 99.99th=[42206] 00:09:01.940 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:01.940 slat (nsec): min=11155, max=56001, avg=15864.55, stdev=7652.50 00:09:01.940 clat (usec): min=128, max=344, avg=173.93, stdev=18.39 00:09:01.940 lat (usec): min=156, max=358, avg=189.80, stdev=20.16 00:09:01.940 clat percentiles (usec): 00:09:01.940 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 161], 00:09:01.940 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:09:01.940 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 192], 95.00th=[ 200], 00:09:01.940 | 99.00th=[ 229], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 347], 00:09:01.940 | 99.99th=[ 347] 00:09:01.940 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.940 lat (usec) : 250=95.32%, 500=0.56% 00:09:01.940 lat (msec) : 50=4.12% 00:09:01.940 cpu : usr=0.20%, sys=1.30%, ctx=536, majf=0, minf=1 00:09:01.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.940 job3: (groupid=0, jobs=1): err= 0: pid=3373228: Wed Nov 20 10:26:02 2024 00:09:01.940 read: IOPS=2277, BW=9111KiB/s (9330kB/s)(9120KiB/1001msec) 00:09:01.940 slat (nsec): min=7446, max=35884, avg=8585.31, stdev=1249.13 00:09:01.940 clat (usec): min=185, max=460, avg=220.76, stdev=15.89 00:09:01.940 lat (usec): min=193, max=488, 
avg=229.35, stdev=16.03 00:09:01.940 clat percentiles (usec): 00:09:01.940 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:09:01.940 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:09:01.940 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 247], 00:09:01.940 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 293], 00:09:01.940 | 99.99th=[ 461] 00:09:01.940 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:01.940 slat (nsec): min=10812, max=55141, avg=12058.51, stdev=1868.70 00:09:01.940 clat (usec): min=123, max=324, avg=168.70, stdev=37.64 00:09:01.940 lat (usec): min=134, max=379, avg=180.76, stdev=37.89 00:09:01.940 clat percentiles (usec): 00:09:01.940 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:01.940 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:09:01.940 | 70.00th=[ 161], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 243], 00:09:01.940 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 285], 00:09:01.940 | 99.99th=[ 326] 00:09:01.940 bw ( KiB/s): min=10296, max=10296, per=64.98%, avg=10296.00, stdev= 0.00, samples=1 00:09:01.940 iops : min= 2574, max= 2574, avg=2574.00, stdev= 0.00, samples=1 00:09:01.940 lat (usec) : 250=97.48%, 500=2.52% 00:09:01.940 cpu : usr=5.10%, sys=6.80%, ctx=4841, majf=0, minf=1 00:09:01.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.940 issued rwts: total=2280,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.940 00:09:01.940 Run status group 0 (all jobs): 00:09:01.940 READ: bw=9087KiB/s (9305kB/s), 87.8KiB/s-9111KiB/s (89.9kB/s-9330kB/s), io=9396KiB (9622kB), run=1001-1034msec 00:09:01.940 WRITE: bw=15.5MiB/s 
(16.2MB/s), 1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1034msec 00:09:01.940 00:09:01.940 Disk stats (read/write): 00:09:01.940 nvme0n1: ios=68/512, merge=0/0, ticks=714/98, in_queue=812, util=86.07% 00:09:01.940 nvme0n2: ios=42/512, merge=0/0, ticks=1732/77, in_queue=1809, util=97.96% 00:09:01.940 nvme0n3: ios=73/512, merge=0/0, ticks=1140/78, in_queue=1218, util=97.80% 00:09:01.940 nvme0n4: ios=2008/2048, merge=0/0, ticks=1372/327, in_queue=1699, util=97.89% 00:09:01.940 10:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:01.940 [global] 00:09:01.940 thread=1 00:09:01.940 invalidate=1 00:09:01.940 rw=randwrite 00:09:01.940 time_based=1 00:09:01.940 runtime=1 00:09:01.940 ioengine=libaio 00:09:01.940 direct=1 00:09:01.940 bs=4096 00:09:01.940 iodepth=1 00:09:01.940 norandommap=0 00:09:01.940 numjobs=1 00:09:01.940 00:09:01.940 verify_dump=1 00:09:01.940 verify_backlog=512 00:09:01.940 verify_state_save=0 00:09:01.940 do_verify=1 00:09:01.940 verify=crc32c-intel 00:09:01.940 [job0] 00:09:01.940 filename=/dev/nvme0n1 00:09:01.940 [job1] 00:09:01.940 filename=/dev/nvme0n2 00:09:01.940 [job2] 00:09:01.940 filename=/dev/nvme0n3 00:09:01.940 [job3] 00:09:01.940 filename=/dev/nvme0n4 00:09:01.940 Could not set queue depth (nvme0n1) 00:09:01.940 Could not set queue depth (nvme0n2) 00:09:01.940 Could not set queue depth (nvme0n3) 00:09:01.940 Could not set queue depth (nvme0n4) 00:09:02.208 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.208 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.208 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.208 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.208 fio-3.35 00:09:02.208 Starting 4 threads 00:09:03.587 00:09:03.587 job0: (groupid=0, jobs=1): err= 0: pid=3373618: Wed Nov 20 10:26:03 2024 00:09:03.587 read: IOPS=2466, BW=9866KiB/s (10.1MB/s)(9876KiB/1001msec) 00:09:03.587 slat (nsec): min=6425, max=32337, avg=7487.74, stdev=1333.66 00:09:03.587 clat (usec): min=178, max=277, avg=219.31, stdev=14.05 00:09:03.587 lat (usec): min=185, max=296, avg=226.80, stdev=14.17 00:09:03.587 clat percentiles (usec): 00:09:03.587 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:03.587 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:09:03.587 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 243], 00:09:03.587 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 277], 00:09:03.587 | 99.99th=[ 277] 00:09:03.587 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:03.587 slat (nsec): min=8934, max=53575, avg=10139.54, stdev=1501.52 00:09:03.587 clat (usec): min=128, max=408, avg=157.69, stdev=13.30 00:09:03.587 lat (usec): min=138, max=461, avg=167.83, stdev=13.79 00:09:03.587 clat percentiles (usec): 00:09:03.587 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:03.587 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:09:03.587 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:09:03.587 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 253], 99.95th=[ 293], 00:09:03.587 | 99.99th=[ 408] 00:09:03.588 bw ( KiB/s): min=12288, max=12288, per=32.16%, avg=12288.00, stdev= 0.00, samples=1 00:09:03.588 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:03.588 lat (usec) : 250=98.75%, 500=1.25% 00:09:03.588 cpu : usr=2.00%, sys=5.00%, ctx=5029, majf=0, minf=1 00:09:03.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:09:03.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 issued rwts: total=2469,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.588 job1: (groupid=0, jobs=1): err= 0: pid=3373619: Wed Nov 20 10:26:03 2024 00:09:03.588 read: IOPS=2461, BW=9844KiB/s (10.1MB/s)(9864KiB/1002msec) 00:09:03.588 slat (nsec): min=6308, max=27267, avg=7208.59, stdev=1096.77 00:09:03.588 clat (usec): min=175, max=314, avg=220.06, stdev=24.08 00:09:03.588 lat (usec): min=183, max=321, avg=227.26, stdev=24.10 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:09:03.588 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:09:03.588 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 273], 00:09:03.588 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 306], 99.95th=[ 306], 00:09:03.588 | 99.99th=[ 314] 00:09:03.588 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:09:03.588 slat (nsec): min=8810, max=56290, avg=9885.03, stdev=1386.35 00:09:03.588 clat (usec): min=128, max=326, avg=157.80, stdev=12.87 00:09:03.588 lat (usec): min=138, max=382, avg=167.68, stdev=13.32 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:09:03.588 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:09:03.588 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:09:03.588 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 247], 99.95th=[ 310], 00:09:03.588 | 99.99th=[ 326] 00:09:03.588 bw ( KiB/s): min= 8464, max=12016, per=26.80%, avg=10240.00, stdev=2511.64, samples=2 00:09:03.588 iops : min= 2116, max= 3004, avg=2560.00, stdev=627.91, samples=2 00:09:03.588 lat (usec) : 250=93.51%, 500=6.49% 00:09:03.588 cpu : usr=2.40%, sys=4.40%, ctx=5027, majf=0, minf=1 00:09:03.588 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 issued rwts: total=2466,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.588 job2: (groupid=0, jobs=1): err= 0: pid=3373620: Wed Nov 20 10:26:03 2024 00:09:03.588 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:03.588 slat (nsec): min=6638, max=38221, avg=9216.70, stdev=1782.46 00:09:03.588 clat (usec): min=195, max=41010, avg=392.93, stdev=2104.70 00:09:03.588 lat (usec): min=204, max=41029, avg=402.14, stdev=2105.21 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 237], 00:09:03.588 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:03.588 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 408], 95.00th=[ 416], 00:09:03.588 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[41157], 99.95th=[41157], 00:09:03.588 | 99.99th=[41157] 00:09:03.588 write: IOPS=1975, BW=7900KiB/s (8090kB/s)(7908KiB/1001msec); 0 zone resets 00:09:03.588 slat (nsec): min=9111, max=36122, avg=11779.36, stdev=1884.40 00:09:03.588 clat (usec): min=129, max=376, avg=176.89, stdev=20.46 00:09:03.588 lat (usec): min=141, max=413, avg=188.67, stdev=20.17 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:03.588 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:09:03.588 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:09:03.588 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 239], 99.95th=[ 379], 00:09:03.588 | 99.99th=[ 379] 00:09:03.588 bw ( KiB/s): min= 8376, max= 8376, per=21.92%, avg=8376.00, stdev= 0.00, samples=1 00:09:03.588 iops : min= 2094, max= 2094, avg=2094.00, stdev= 0.00, samples=1 
00:09:03.588 lat (usec) : 250=77.91%, 500=21.92%, 750=0.03% 00:09:03.588 lat (msec) : 20=0.03%, 50=0.11% 00:09:03.588 cpu : usr=2.00%, sys=3.90%, ctx=3514, majf=0, minf=1 00:09:03.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 issued rwts: total=1536,1977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.588 job3: (groupid=0, jobs=1): err= 0: pid=3373621: Wed Nov 20 10:26:03 2024 00:09:03.588 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:03.588 slat (nsec): min=7122, max=26887, avg=8477.24, stdev=1477.38 00:09:03.588 clat (usec): min=186, max=545, avg=245.41, stdev=26.28 00:09:03.588 lat (usec): min=193, max=555, avg=253.88, stdev=26.53 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 227], 00:09:03.588 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:09:03.588 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 297], 00:09:03.588 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 490], 99.95th=[ 506], 00:09:03.588 | 99.99th=[ 545] 00:09:03.588 write: IOPS=2471, BW=9886KiB/s (10.1MB/s)(9896KiB/1001msec); 0 zone resets 00:09:03.588 slat (nsec): min=9977, max=37366, avg=11784.41, stdev=1932.80 00:09:03.588 clat (usec): min=140, max=352, avg=176.60, stdev=16.59 00:09:03.588 lat (usec): min=151, max=389, avg=188.39, stdev=17.07 00:09:03.588 clat percentiles (usec): 00:09:03.588 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:09:03.588 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:09:03.588 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:09:03.588 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 241], 99.95th=[ 314], 00:09:03.588 | 
99.99th=[ 355] 00:09:03.588 bw ( KiB/s): min= 9960, max= 9960, per=26.07%, avg=9960.00, stdev= 0.00, samples=1 00:09:03.588 iops : min= 2490, max= 2490, avg=2490.00, stdev= 0.00, samples=1 00:09:03.588 lat (usec) : 250=84.92%, 500=15.04%, 750=0.04% 00:09:03.588 cpu : usr=4.60%, sys=6.50%, ctx=4522, majf=0, minf=1 00:09:03.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.588 issued rwts: total=2048,2474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.588 00:09:03.588 Run status group 0 (all jobs): 00:09:03.588 READ: bw=33.2MiB/s (34.8MB/s), 6138KiB/s-9866KiB/s (6285kB/s-10.1MB/s), io=33.3MiB (34.9MB), run=1001-1002msec 00:09:03.588 WRITE: bw=37.3MiB/s (39.1MB/s), 7900KiB/s-9.99MiB/s (8090kB/s-10.5MB/s), io=37.4MiB (39.2MB), run=1001-1002msec 00:09:03.588 00:09:03.588 Disk stats (read/write): 00:09:03.588 nvme0n1: ios=2084/2236, merge=0/0, ticks=507/347, in_queue=854, util=87.17% 00:09:03.588 nvme0n2: ios=2079/2254, merge=0/0, ticks=461/330, in_queue=791, util=87.21% 00:09:03.588 nvme0n3: ios=1364/1536, merge=0/0, ticks=521/247, in_queue=768, util=88.85% 00:09:03.588 nvme0n4: ios=1860/2048, merge=0/0, ticks=416/334, in_queue=750, util=89.60% 00:09:03.588 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:03.588 [global] 00:09:03.588 thread=1 00:09:03.588 invalidate=1 00:09:03.588 rw=write 00:09:03.588 time_based=1 00:09:03.588 runtime=1 00:09:03.588 ioengine=libaio 00:09:03.588 direct=1 00:09:03.588 bs=4096 00:09:03.588 iodepth=128 00:09:03.588 norandommap=0 00:09:03.588 numjobs=1 00:09:03.588 00:09:03.588 verify_dump=1 00:09:03.588 verify_backlog=512 
00:09:03.588 verify_state_save=0 00:09:03.588 do_verify=1 00:09:03.588 verify=crc32c-intel 00:09:03.588 [job0] 00:09:03.588 filename=/dev/nvme0n1 00:09:03.588 [job1] 00:09:03.588 filename=/dev/nvme0n2 00:09:03.588 [job2] 00:09:03.588 filename=/dev/nvme0n3 00:09:03.588 [job3] 00:09:03.588 filename=/dev/nvme0n4 00:09:03.588 Could not set queue depth (nvme0n1) 00:09:03.588 Could not set queue depth (nvme0n2) 00:09:03.588 Could not set queue depth (nvme0n3) 00:09:03.588 Could not set queue depth (nvme0n4) 00:09:03.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.588 fio-3.35 00:09:03.588 Starting 4 threads 00:09:04.968 00:09:04.968 job0: (groupid=0, jobs=1): err= 0: pid=3373993: Wed Nov 20 10:26:05 2024 00:09:04.968 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:09:04.968 slat (nsec): min=1076, max=10734k, avg=94414.40, stdev=586658.89 00:09:04.968 clat (usec): min=5858, max=23713, avg=11763.80, stdev=2503.57 00:09:04.968 lat (usec): min=5876, max=23739, avg=11858.22, stdev=2549.86 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 7111], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[ 9896], 00:09:04.968 | 30.00th=[10290], 40.00th=[10421], 50.00th=[11338], 60.00th=[12125], 00:09:04.968 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15139], 95.00th=[16057], 00:09:04.968 | 99.00th=[19530], 99.50th=[20579], 99.90th=[22152], 99.95th=[22152], 00:09:04.968 | 99.99th=[23725] 00:09:04.968 write: IOPS=5012, BW=19.6MiB/s (20.5MB/s)(19.8MiB/1013msec); 0 zone resets 00:09:04.968 slat (nsec): min=1928, max=24866k, avg=105986.02, 
stdev=610033.75 00:09:04.968 clat (usec): min=4526, max=50435, avg=13844.87, stdev=8171.67 00:09:04.968 lat (usec): min=4539, max=50446, avg=13950.85, stdev=8238.09 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:04.968 | 30.00th=[10290], 40.00th=[10421], 50.00th=[11076], 60.00th=[11994], 00:09:04.968 | 70.00th=[12387], 80.00th=[14615], 90.00th=[20317], 95.00th=[37487], 00:09:04.968 | 99.00th=[47449], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:09:04.968 | 99.99th=[50594] 00:09:04.968 bw ( KiB/s): min=16000, max=23560, per=27.22%, avg=19780.00, stdev=5345.73, samples=2 00:09:04.968 iops : min= 4000, max= 5890, avg=4945.00, stdev=1336.43, samples=2 00:09:04.968 lat (msec) : 10=23.01%, 20=70.32%, 50=6.38%, 100=0.29% 00:09:04.968 cpu : usr=3.56%, sys=4.64%, ctx=568, majf=0, minf=1 00:09:04.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:04.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.968 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.968 job1: (groupid=0, jobs=1): err= 0: pid=3373994: Wed Nov 20 10:26:05 2024 00:09:04.968 read: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec) 00:09:04.968 slat (nsec): min=1106, max=19229k, avg=136896.17, stdev=867309.57 00:09:04.968 clat (usec): min=5205, max=43553, avg=16533.81, stdev=5444.30 00:09:04.968 lat (usec): min=5210, max=43577, avg=16670.70, stdev=5518.45 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 8848], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:09:04.968 | 30.00th=[12649], 40.00th=[14353], 50.00th=[15401], 60.00th=[16450], 00:09:04.968 | 70.00th=[17433], 80.00th=[18744], 90.00th=[23462], 95.00th=[29230], 00:09:04.968 | 99.00th=[34866], 
99.50th=[34866], 99.90th=[35390], 99.95th=[36963], 00:09:04.968 | 99.99th=[43779] 00:09:04.968 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:04.968 slat (nsec): min=1905, max=15636k, avg=139467.92, stdev=846895.25 00:09:04.968 clat (usec): min=7196, max=43867, avg=19223.22, stdev=7575.11 00:09:04.968 lat (usec): min=7204, max=43904, avg=19362.69, stdev=7656.03 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 8848], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:09:04.968 | 30.00th=[12518], 40.00th=[12911], 50.00th=[17171], 60.00th=[20317], 00:09:04.968 | 70.00th=[26346], 80.00th=[26870], 90.00th=[30540], 95.00th=[32900], 00:09:04.968 | 99.00th=[35914], 99.50th=[35914], 99.90th=[40633], 99.95th=[42730], 00:09:04.968 | 99.99th=[43779] 00:09:04.968 bw ( KiB/s): min=12263, max=16384, per=19.71%, avg=14323.50, stdev=2913.99, samples=2 00:09:04.968 iops : min= 3065, max= 4096, avg=3580.50, stdev=729.03, samples=2 00:09:04.968 lat (msec) : 10=1.92%, 20=67.91%, 50=30.17% 00:09:04.968 cpu : usr=1.89%, sys=4.08%, ctx=309, majf=0, minf=1 00:09:04.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:04.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.968 issued rwts: total=3489,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.968 job2: (groupid=0, jobs=1): err= 0: pid=3373996: Wed Nov 20 10:26:05 2024 00:09:04.968 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:09:04.968 slat (nsec): min=1067, max=8819.2k, avg=95119.45, stdev=563355.35 00:09:04.968 clat (usec): min=5649, max=18968, avg=12483.61, stdev=1518.44 00:09:04.968 lat (usec): min=5657, max=18998, avg=12578.73, stdev=1590.13 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10945], 
20.00th=[11731], 00:09:04.968 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:09:04.968 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14353], 95.00th=[14746], 00:09:04.968 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18744], 00:09:04.968 | 99.99th=[19006] 00:09:04.968 write: IOPS=5385, BW=21.0MiB/s (22.1MB/s)(21.3MiB/1011msec); 0 zone resets 00:09:04.968 slat (nsec): min=1849, max=8391.7k, avg=85338.90, stdev=428940.54 00:09:04.968 clat (usec): min=537, max=19799, avg=11822.03, stdev=2544.97 00:09:04.968 lat (usec): min=546, max=19822, avg=11907.37, stdev=2579.17 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 1991], 5.00th=[ 6456], 10.00th=[ 8979], 20.00th=[11338], 00:09:04.968 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:09:04.968 | 70.00th=[12518], 80.00th=[13566], 90.00th=[13960], 95.00th=[14877], 00:09:04.968 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[19792], 00:09:04.968 | 99.99th=[19792] 00:09:04.968 bw ( KiB/s): min=20240, max=22259, per=29.24%, avg=21249.50, stdev=1427.65, samples=2 00:09:04.968 iops : min= 5060, max= 5564, avg=5312.00, stdev=356.38, samples=2 00:09:04.968 lat (usec) : 750=0.06%, 1000=0.01% 00:09:04.968 lat (msec) : 2=0.48%, 4=0.57%, 10=9.09%, 20=89.80% 00:09:04.968 cpu : usr=2.77%, sys=5.45%, ctx=569, majf=0, minf=2 00:09:04.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:04.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.968 issued rwts: total=5120,5445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.968 job3: (groupid=0, jobs=1): err= 0: pid=3373999: Wed Nov 20 10:26:05 2024 00:09:04.968 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:09:04.968 slat (nsec): min=1356, max=12480k, avg=114098.74, 
stdev=733202.12 00:09:04.968 clat (usec): min=4207, max=27920, avg=14681.09, stdev=3590.14 00:09:04.968 lat (usec): min=4213, max=27928, avg=14795.19, stdev=3655.89 00:09:04.968 clat percentiles (usec): 00:09:04.968 | 1.00th=[ 6718], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[11600], 00:09:04.968 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13960], 60.00th=[15270], 00:09:04.968 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[20317], 00:09:04.968 | 99.00th=[25297], 99.50th=[26608], 99.90th=[27919], 99.95th=[27919], 00:09:04.968 | 99.99th=[27919] 00:09:04.968 write: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1013msec); 0 zone resets 00:09:04.968 slat (nsec): min=1949, max=9826.6k, avg=112697.08, stdev=507103.73 00:09:04.968 clat (usec): min=951, max=38791, avg=15861.11, stdev=6137.47 00:09:04.968 lat (usec): min=961, max=38795, avg=15973.81, stdev=6183.16 00:09:04.969 clat percentiles (usec): 00:09:04.969 | 1.00th=[ 3752], 5.00th=[ 7242], 10.00th=[ 8848], 20.00th=[11600], 00:09:04.969 | 30.00th=[12256], 40.00th=[13566], 50.00th=[13960], 60.00th=[15139], 00:09:04.969 | 70.00th=[18482], 80.00th=[21365], 90.00th=[25560], 95.00th=[27657], 00:09:04.969 | 99.00th=[31065], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:09:04.969 | 99.99th=[38536] 00:09:04.969 bw ( KiB/s): min=13277, max=20032, per=22.92%, avg=16654.50, stdev=4776.51, samples=2 00:09:04.969 iops : min= 3319, max= 5008, avg=4163.50, stdev=1194.30, samples=2 00:09:04.969 lat (usec) : 1000=0.10% 00:09:04.969 lat (msec) : 4=0.60%, 10=8.72%, 20=75.71%, 50=14.87% 00:09:04.969 cpu : usr=1.98%, sys=5.04%, ctx=547, majf=0, minf=1 00:09:04.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:04.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.969 issued rwts: total=4096,4294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.969 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:04.969 00:09:04.969 Run status group 0 (all jobs): 00:09:04.969 READ: bw=66.8MiB/s (70.0MB/s), 13.5MiB/s-19.8MiB/s (14.2MB/s-20.7MB/s), io=67.6MiB (70.9MB), run=1006-1013msec 00:09:04.969 WRITE: bw=71.0MiB/s (74.4MB/s), 13.9MiB/s-21.0MiB/s (14.6MB/s-22.1MB/s), io=71.9MiB (75.4MB), run=1006-1013msec 00:09:04.969 00:09:04.969 Disk stats (read/write): 00:09:04.969 nvme0n1: ios=4272/4608, merge=0/0, ticks=28845/29060, in_queue=57905, util=98.00% 00:09:04.969 nvme0n2: ios=2805/3072, merge=0/0, ticks=16008/19748, in_queue=35756, util=96.14% 00:09:04.969 nvme0n3: ios=4208/4608, merge=0/0, ticks=17134/19501, in_queue=36635, util=89.06% 00:09:04.969 nvme0n4: ios=3072/3583, merge=0/0, ticks=30164/36198, in_queue=66362, util=89.72% 00:09:04.969 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:04.969 [global] 00:09:04.969 thread=1 00:09:04.969 invalidate=1 00:09:04.969 rw=randwrite 00:09:04.969 time_based=1 00:09:04.969 runtime=1 00:09:04.969 ioengine=libaio 00:09:04.969 direct=1 00:09:04.969 bs=4096 00:09:04.969 iodepth=128 00:09:04.969 norandommap=0 00:09:04.969 numjobs=1 00:09:04.969 00:09:04.969 verify_dump=1 00:09:04.969 verify_backlog=512 00:09:04.969 verify_state_save=0 00:09:04.969 do_verify=1 00:09:04.969 verify=crc32c-intel 00:09:04.969 [job0] 00:09:04.969 filename=/dev/nvme0n1 00:09:04.969 [job1] 00:09:04.969 filename=/dev/nvme0n2 00:09:04.969 [job2] 00:09:04.969 filename=/dev/nvme0n3 00:09:04.969 [job3] 00:09:04.969 filename=/dev/nvme0n4 00:09:04.969 Could not set queue depth (nvme0n1) 00:09:04.969 Could not set queue depth (nvme0n2) 00:09:04.969 Could not set queue depth (nvme0n3) 00:09:04.969 Could not set queue depth (nvme0n4) 00:09:05.228 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.228 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.228 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.228 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.228 fio-3.35 00:09:05.228 Starting 4 threads 00:09:06.609 00:09:06.609 job0: (groupid=0, jobs=1): err= 0: pid=3374371: Wed Nov 20 10:26:07 2024 00:09:06.609 read: IOPS=4526, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1006msec) 00:09:06.609 slat (nsec): min=1481, max=11505k, avg=102773.86, stdev=761167.17 00:09:06.609 clat (usec): min=2108, max=31853, avg=13030.18, stdev=3850.17 00:09:06.609 lat (usec): min=2119, max=31857, avg=13132.95, stdev=3927.44 00:09:06.609 clat percentiles (usec): 00:09:06.609 | 1.00th=[ 5932], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11207], 00:09:06.609 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:09:06.609 | 70.00th=[12780], 80.00th=[13829], 90.00th=[18220], 95.00th=[20841], 00:09:06.609 | 99.00th=[29230], 99.50th=[30540], 99.90th=[31851], 99.95th=[31851], 00:09:06.609 | 99.99th=[31851] 00:09:06.609 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:06.609 slat (usec): min=2, max=9990, avg=104.69, stdev=579.00 00:09:06.609 clat (usec): min=402, max=33469, avg=14790.39, stdev=6257.23 00:09:06.609 lat (usec): min=435, max=33483, avg=14895.08, stdev=6307.80 00:09:06.609 clat percentiles (usec): 00:09:06.609 | 1.00th=[ 2933], 5.00th=[ 6915], 10.00th=[ 8979], 20.00th=[ 9372], 00:09:06.609 | 30.00th=[10421], 40.00th=[10814], 50.00th=[12256], 60.00th=[15401], 00:09:06.609 | 70.00th=[20317], 80.00th=[21627], 90.00th=[22676], 95.00th=[23725], 00:09:06.609 | 99.00th=[30802], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:09:06.609 | 99.99th=[33424] 00:09:06.609 bw ( KiB/s): min=16592, max=20272, per=26.70%, avg=18432.00, stdev=2602.15, samples=2 
00:09:06.609 iops : min= 4148, max= 5068, avg=4608.00, stdev=650.54, samples=2 00:09:06.609 lat (usec) : 500=0.02%, 1000=0.08% 00:09:06.609 lat (msec) : 2=0.20%, 4=0.57%, 10=15.71%, 20=64.73%, 50=18.70% 00:09:06.609 cpu : usr=3.68%, sys=6.17%, ctx=418, majf=0, minf=2 00:09:06.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:06.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.609 issued rwts: total=4554,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.609 job1: (groupid=0, jobs=1): err= 0: pid=3374372: Wed Nov 20 10:26:07 2024 00:09:06.609 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:09:06.609 slat (nsec): min=1070, max=19935k, avg=84001.75, stdev=537652.02 00:09:06.609 clat (usec): min=6778, max=31138, avg=11005.51, stdev=3034.26 00:09:06.609 lat (usec): min=6786, max=31143, avg=11089.51, stdev=3054.77 00:09:06.609 clat percentiles (usec): 00:09:06.609 | 1.00th=[ 7373], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:06.609 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:09:06.609 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12256], 95.00th=[13435], 00:09:06.609 | 99.00th=[28705], 99.50th=[28967], 99.90th=[31065], 99.95th=[31065], 00:09:06.609 | 99.99th=[31065] 00:09:06.609 write: IOPS=5761, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec); 0 zone resets 00:09:06.609 slat (nsec): min=1750, max=12213k, avg=86382.55, stdev=497548.99 00:09:06.609 clat (usec): min=2782, max=40290, avg=11189.32, stdev=3982.17 00:09:06.609 lat (usec): min=2790, max=40297, avg=11275.70, stdev=3997.90 00:09:06.609 clat percentiles (usec): 00:09:06.609 | 1.00th=[ 6652], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 9896], 00:09:06.610 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:09:06.610 | 
70.00th=[11207], 80.00th=[11863], 90.00th=[12256], 95.00th=[14615], 00:09:06.610 | 99.00th=[35914], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:09:06.610 | 99.99th=[40109] 00:09:06.610 bw ( KiB/s): min=21008, max=24208, per=32.75%, avg=22608.00, stdev=2262.74, samples=2 00:09:06.610 iops : min= 5252, max= 6052, avg=5652.00, stdev=565.69, samples=2 00:09:06.610 lat (msec) : 4=0.10%, 10=21.63%, 20=75.40%, 50=2.87% 00:09:06.610 cpu : usr=2.30%, sys=5.89%, ctx=543, majf=0, minf=1 00:09:06.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:06.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.610 issued rwts: total=5632,5779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.610 job2: (groupid=0, jobs=1): err= 0: pid=3374373: Wed Nov 20 10:26:07 2024 00:09:06.610 read: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.9MiB/1047msec) 00:09:06.610 slat (nsec): min=1111, max=40242k, avg=182537.46, stdev=1441811.95 00:09:06.610 clat (msec): min=7, max=106, avg=25.32, stdev=22.54 00:09:06.610 lat (msec): min=7, max=106, avg=25.50, stdev=22.64 00:09:06.610 clat percentiles (msec): 00:09:06.610 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:09:06.610 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 22], 00:09:06.610 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 57], 95.00th=[ 80], 00:09:06.610 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:09:06.610 | 99.99th=[ 107] 00:09:06.610 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:09:06.610 slat (nsec): min=1884, max=10496k, avg=156648.13, stdev=878817.06 00:09:06.610 clat (usec): min=1186, max=108688, avg=19960.26, stdev=19559.97 00:09:06.610 lat (usec): min=1195, max=108695, avg=20116.91, stdev=19688.07 00:09:06.610 clat percentiles (msec): 
00:09:06.610 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 12], 00:09:06.610 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:09:06.610 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 40], 95.00th=[ 65], 00:09:06.610 | 99.00th=[ 105], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 109], 00:09:06.610 | 99.99th=[ 109] 00:09:06.610 bw ( KiB/s): min= 8192, max=16384, per=17.80%, avg=12288.00, stdev=5792.62, samples=2 00:09:06.610 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:09:06.610 lat (msec) : 2=0.17%, 10=4.97%, 20=63.46%, 50=20.13%, 100=9.32% 00:09:06.610 lat (msec) : 250=1.95% 00:09:06.610 cpu : usr=2.01%, sys=2.49%, ctx=311, majf=0, minf=1 00:09:06.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:06.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.610 issued rwts: total=2785,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.610 job3: (groupid=0, jobs=1): err= 0: pid=3374374: Wed Nov 20 10:26:07 2024 00:09:06.610 read: IOPS=4370, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:09:06.610 slat (nsec): min=1124, max=22988k, avg=107837.99, stdev=759891.26 00:09:06.610 clat (usec): min=764, max=58072, avg=13816.36, stdev=8273.42 00:09:06.610 lat (usec): min=3680, max=58080, avg=13924.20, stdev=8315.36 00:09:06.610 clat percentiles (usec): 00:09:06.610 | 1.00th=[ 5997], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11338], 00:09:06.610 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:09:06.610 | 70.00th=[12780], 80.00th=[13829], 90.00th=[15139], 95.00th=[23200], 00:09:06.610 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:09:06.610 | 99.99th=[57934] 00:09:06.610 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:06.610 slat (nsec): 
min=1821, max=29102k, avg=109704.48, stdev=724187.00 00:09:06.610 clat (usec): min=6257, max=63414, avg=14360.84, stdev=7457.59 00:09:06.610 lat (usec): min=6264, max=63463, avg=14470.55, stdev=7506.49 00:09:06.610 clat percentiles (usec): 00:09:06.610 | 1.00th=[ 7504], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11338], 00:09:06.610 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:09:06.610 | 70.00th=[13173], 80.00th=[15139], 90.00th=[21627], 95.00th=[24773], 00:09:06.610 | 99.00th=[57410], 99.50th=[59507], 99.90th=[63177], 99.95th=[63177], 00:09:06.610 | 99.99th=[63177] 00:09:06.610 bw ( KiB/s): min=16384, max=20480, per=26.70%, avg=18432.00, stdev=2896.31, samples=2 00:09:06.610 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:06.610 lat (usec) : 1000=0.01% 00:09:06.610 lat (msec) : 4=0.36%, 10=8.30%, 20=80.96%, 50=8.17%, 100=2.20% 00:09:06.610 cpu : usr=2.69%, sys=4.49%, ctx=404, majf=0, minf=1 00:09:06.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:06.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.610 issued rwts: total=4388,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.610 00:09:06.610 Run status group 0 (all jobs): 00:09:06.610 READ: bw=64.8MiB/s (67.9MB/s), 10.4MiB/s-21.9MiB/s (10.9MB/s-23.0MB/s), io=67.8MiB (71.1MB), run=1003-1047msec 00:09:06.610 WRITE: bw=67.4MiB/s (70.7MB/s), 11.5MiB/s-22.5MiB/s (12.0MB/s-23.6MB/s), io=70.6MiB (74.0MB), run=1003-1047msec 00:09:06.610 00:09:06.610 Disk stats (read/write): 00:09:06.610 nvme0n1: ios=3637/3783, merge=0/0, ticks=44552/52210, in_queue=96762, util=97.70% 00:09:06.610 nvme0n2: ios=4608/4915, merge=0/0, ticks=15868/17642, in_queue=33510, util=83.14% 00:09:06.610 nvme0n3: ios=2071/2279, merge=0/0, ticks=17545/19988, in_queue=37533, 
util=97.84% 00:09:06.610 nvme0n4: ios=3439/3584, merge=0/0, ticks=21206/20645, in_queue=41851, util=88.99% 00:09:06.610 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:06.610 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3374606 00:09:06.610 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:06.610 10:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:06.610 [global] 00:09:06.610 thread=1 00:09:06.610 invalidate=1 00:09:06.610 rw=read 00:09:06.610 time_based=1 00:09:06.610 runtime=10 00:09:06.610 ioengine=libaio 00:09:06.610 direct=1 00:09:06.610 bs=4096 00:09:06.610 iodepth=1 00:09:06.610 norandommap=1 00:09:06.610 numjobs=1 00:09:06.610 00:09:06.610 [job0] 00:09:06.610 filename=/dev/nvme0n1 00:09:06.610 [job1] 00:09:06.610 filename=/dev/nvme0n2 00:09:06.610 [job2] 00:09:06.610 filename=/dev/nvme0n3 00:09:06.610 [job3] 00:09:06.610 filename=/dev/nvme0n4 00:09:06.610 Could not set queue depth (nvme0n1) 00:09:06.610 Could not set queue depth (nvme0n2) 00:09:06.610 Could not set queue depth (nvme0n3) 00:09:06.610 Could not set queue depth (nvme0n4) 00:09:06.869 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.869 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.869 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.869 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.869 fio-3.35 00:09:06.869 Starting 4 threads 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_raid_delete concat0 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:10.160 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=421888, buflen=4096 00:09:10.160 fio: pid=3374750, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:10.160 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:09:10.160 fio: pid=3374749, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.160 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:10.160 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=483328, buflen=4096 00:09:10.160 fio: pid=3374747, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.419 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.419 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:10.419 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9822208, buflen=4096 00:09:10.419 fio: 
pid=3374748, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.419 00:09:10.419 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3374747: Wed Nov 20 10:26:11 2024 00:09:10.419 read: IOPS=38, BW=152KiB/s (156kB/s)(472KiB/3103msec) 00:09:10.419 slat (usec): min=6, max=23420, avg=215.23, stdev=2145.26 00:09:10.419 clat (usec): min=195, max=42471, avg=25887.45, stdev=19871.12 00:09:10.419 lat (usec): min=203, max=65638, avg=26104.30, stdev=20149.33 00:09:10.419 clat percentiles (usec): 00:09:10.419 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 229], 00:09:10.419 | 30.00th=[ 249], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:09:10.419 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:10.419 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:10.419 | 99.99th=[42730] 00:09:10.419 bw ( KiB/s): min= 96, max= 252, per=4.68%, avg=151.33, stdev=56.48, samples=6 00:09:10.419 iops : min= 24, max= 63, avg=37.83, stdev=14.12, samples=6 00:09:10.419 lat (usec) : 250=30.25%, 500=6.72% 00:09:10.419 lat (msec) : 50=62.18% 00:09:10.419 cpu : usr=0.13%, sys=0.00%, ctx=122, majf=0, minf=1 00:09:10.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.419 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.419 issued rwts: total=119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.419 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3374748: Wed Nov 20 10:26:11 2024 00:09:10.419 read: IOPS=718, BW=2874KiB/s (2943kB/s)(9592KiB/3338msec) 00:09:10.419 slat (usec): min=6, max=17915, avg=20.28, stdev=437.85 00:09:10.419 clat (usec): min=165, max=42004, avg=1359.82, 
stdev=6780.47 00:09:10.419 lat (usec): min=172, max=59075, avg=1375.18, stdev=6835.11 00:09:10.419 clat percentiles (usec): 00:09:10.419 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:09:10.419 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 00:09:10.419 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 245], 00:09:10.419 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:10.419 | 99.99th=[42206] 00:09:10.419 bw ( KiB/s): min= 152, max= 6896, per=84.88%, avg=2738.67, stdev=3237.68, samples=6 00:09:10.419 iops : min= 38, max= 1724, avg=684.67, stdev=809.42, samples=6 00:09:10.419 lat (usec) : 250=95.50%, 500=1.50%, 750=0.04% 00:09:10.419 lat (msec) : 2=0.08%, 50=2.83% 00:09:10.419 cpu : usr=0.09%, sys=0.78%, ctx=2403, majf=0, minf=2 00:09:10.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.420 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3374749: Wed Nov 20 10:26:11 2024 00:09:10.420 read: IOPS=25, BW=101KiB/s (103kB/s)(292KiB/2894msec) 00:09:10.420 slat (nsec): min=9711, max=35743, avg=22548.08, stdev=2591.32 00:09:10.420 clat (usec): min=355, max=42011, avg=39328.70, stdev=8112.01 00:09:10.420 lat (usec): min=379, max=42033, avg=39351.25, stdev=8110.80 00:09:10.420 clat percentiles (usec): 00:09:10.420 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.420 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.420 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.420 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:09:10.420 | 99.99th=[42206] 00:09:10.420 bw ( KiB/s): min= 96, max= 120, per=3.10%, avg=100.80, stdev=10.73, samples=5 00:09:10.420 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:09:10.420 lat (usec) : 500=4.05% 00:09:10.420 lat (msec) : 50=94.59% 00:09:10.420 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=1 00:09:10.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.420 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3374750: Wed Nov 20 10:26:11 2024 00:09:10.420 read: IOPS=38, BW=153KiB/s (157kB/s)(412KiB/2692msec) 00:09:10.420 slat (nsec): min=8986, max=48856, avg=18924.28, stdev=7034.90 00:09:10.420 clat (usec): min=195, max=42008, avg=25872.11, stdev=19705.10 00:09:10.420 lat (usec): min=206, max=42035, avg=25890.99, stdev=19704.63 00:09:10.420 clat percentiles (usec): 00:09:10.420 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:09:10.420 | 30.00th=[ 237], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:09:10.420 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.420 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.420 | 99.99th=[42206] 00:09:10.420 bw ( KiB/s): min= 104, max= 192, per=4.71%, avg=152.00, stdev=35.33, samples=5 00:09:10.420 iops : min= 26, max= 48, avg=38.00, stdev= 8.83, samples=5 00:09:10.420 lat (usec) : 250=34.62%, 500=1.92% 00:09:10.420 lat (msec) : 50=62.50% 00:09:10.420 cpu : usr=0.15%, sys=0.00%, ctx=105, majf=0, minf=2 00:09:10.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:09:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.420 issued rwts: total=104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.420 00:09:10.420 Run status group 0 (all jobs): 00:09:10.420 READ: bw=3226KiB/s (3303kB/s), 101KiB/s-2874KiB/s (103kB/s-2943kB/s), io=10.5MiB (11.0MB), run=2692-3338msec 00:09:10.420 00:09:10.420 Disk stats (read/write): 00:09:10.420 nvme0n1: ios=144/0, merge=0/0, ticks=3603/0, in_queue=3603, util=98.77% 00:09:10.420 nvme0n2: ios=2427/0, merge=0/0, ticks=3997/0, in_queue=3997, util=98.85% 00:09:10.420 nvme0n3: ios=71/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.17% 00:09:10.420 nvme0n4: ios=125/0, merge=0/0, ticks=2961/0, in_queue=2961, util=99.17% 00:09:10.680 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.680 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:10.938 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.938 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:10.938 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.938 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:11.197 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.197 10:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3374606 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.456 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.457 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.457 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.457 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.457 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:11.716 nvmf hotplug test: fio failed as expected 00:09:11.716 10:26:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.716 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.716 rmmod nvme_tcp 00:09:11.976 rmmod nvme_fabrics 00:09:11.976 rmmod nvme_keyring 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3371701 ']' 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@518 -- # killprocess 3371701 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3371701 ']' 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3371701 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371701 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371701' 00:09:11.976 killing process with pid 3371701 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3371701 00:09:11.976 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3371701 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.235 10:26:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.235 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.236 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.236 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.144 00:09:14.144 real 0m27.021s 00:09:14.144 user 1m47.578s 00:09:14.144 sys 0m8.403s 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.144 ************************************ 00:09:14.144 END TEST nvmf_fio_target 00:09:14.144 ************************************ 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.144 10:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.404 ************************************ 00:09:14.404 START TEST nvmf_bdevio 00:09:14.404 ************************************ 00:09:14.404 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.404 * Looking for test storage... 00:09:14.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.404 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.404 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.404 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:14.404 
10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.404 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.405 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:14.405 --rc genhtml_branch_coverage=1 00:09:14.405 --rc genhtml_function_coverage=1 00:09:14.405 --rc genhtml_legend=1 00:09:14.405 --rc geninfo_all_blocks=1 00:09:14.405 --rc geninfo_unexecuted_blocks=1 00:09:14.405 00:09:14.405 ' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.405 --rc genhtml_branch_coverage=1 00:09:14.405 --rc genhtml_function_coverage=1 00:09:14.405 --rc genhtml_legend=1 00:09:14.405 --rc geninfo_all_blocks=1 00:09:14.405 --rc geninfo_unexecuted_blocks=1 00:09:14.405 00:09:14.405 ' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.405 --rc genhtml_branch_coverage=1 00:09:14.405 --rc genhtml_function_coverage=1 00:09:14.405 --rc genhtml_legend=1 00:09:14.405 --rc geninfo_all_blocks=1 00:09:14.405 --rc geninfo_unexecuted_blocks=1 00:09:14.405 00:09:14.405 ' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.405 --rc genhtml_branch_coverage=1 00:09:14.405 --rc genhtml_function_coverage=1 00:09:14.405 --rc genhtml_legend=1 00:09:14.405 --rc geninfo_all_blocks=1 00:09:14.405 --rc geninfo_unexecuted_blocks=1 00:09:14.405 00:09:14.405 ' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.405 10:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.069 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.069 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.069 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.069 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.070 10:26:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.070 10:26:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:21.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:21.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.070 
10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:21.070 Found net devices under 0000:86:00.0: cvl_0_0 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:21.070 Found net devices under 0000:86:00.1: cvl_0_1 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.070 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.071 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.071 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.071 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.071 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.071 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:21.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:09:21.071 00:09:21.071 --- 10.0.0.2 ping statistics --- 00:09:21.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.071 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:09:21.071 00:09:21.071 --- 10.0.0.1 ping statistics --- 00:09:21.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.071 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.071 10:26:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3379222 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3379222 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3379222 ']' 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.071 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.071 [2024-11-20 10:26:21.137864] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:09:21.071 [2024-11-20 10:26:21.137915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.071 [2024-11-20 10:26:21.219390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.071 [2024-11-20 10:26:21.260135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.071 [2024-11-20 10:26:21.260174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.071 [2024-11-20 10:26:21.260182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.071 [2024-11-20 10:26:21.260188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.071 [2024-11-20 10:26:21.260193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.071 [2024-11-20 10:26:21.261708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:21.071 [2024-11-20 10:26:21.261816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:21.071 [2024-11-20 10:26:21.261902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.071 [2024-11-20 10:26:21.261903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:21.330 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.330 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:21.330 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.330 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.330 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.330 [2024-11-20 10:26:22.029017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.330 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.330 10:26:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.588 Malloc0 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.588 [2024-11-20 10:26:22.092195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:21.588 { 00:09:21.588 "params": { 00:09:21.588 "name": "Nvme$subsystem", 00:09:21.588 "trtype": "$TEST_TRANSPORT", 00:09:21.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.588 "adrfam": "ipv4", 00:09:21.588 "trsvcid": "$NVMF_PORT", 00:09:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.588 "hdgst": ${hdgst:-false}, 00:09:21.588 "ddgst": ${ddgst:-false} 00:09:21.588 }, 00:09:21.588 "method": "bdev_nvme_attach_controller" 00:09:21.588 } 00:09:21.588 EOF 00:09:21.588 )") 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:21.588 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:21.588 "params": { 00:09:21.588 "name": "Nvme1", 00:09:21.588 "trtype": "tcp", 00:09:21.588 "traddr": "10.0.0.2", 00:09:21.588 "adrfam": "ipv4", 00:09:21.588 "trsvcid": "4420", 00:09:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.588 "hdgst": false, 00:09:21.588 "ddgst": false 00:09:21.588 }, 00:09:21.588 "method": "bdev_nvme_attach_controller" 00:09:21.588 }' 00:09:21.588 [2024-11-20 10:26:22.145311] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:09:21.588 [2024-11-20 10:26:22.145357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379348 ] 00:09:21.588 [2024-11-20 10:26:22.222080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.588 [2024-11-20 10:26:22.266405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.588 [2024-11-20 10:26:22.266516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.588 [2024-11-20 10:26:22.266517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.846 I/O targets: 00:09:21.847 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:21.847 00:09:21.847 00:09:21.847 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.847 http://cunit.sourceforge.net/ 00:09:21.847 00:09:21.847 00:09:21.847 Suite: bdevio tests on: Nvme1n1 00:09:21.847 Test: blockdev write read block ...passed 00:09:21.847 Test: blockdev write zeroes read block ...passed 00:09:21.847 Test: blockdev write zeroes read no split ...passed 00:09:21.847 Test: blockdev write zeroes read split 
...passed 00:09:21.847 Test: blockdev write zeroes read split partial ...passed 00:09:21.847 Test: blockdev reset ...[2024-11-20 10:26:22.575203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:21.847 [2024-11-20 10:26:22.575265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd1340 (9): Bad file descriptor 00:09:22.105 [2024-11-20 10:26:22.589809] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:22.105 passed 00:09:22.105 Test: blockdev write read 8 blocks ...passed 00:09:22.105 Test: blockdev write read size > 128k ...passed 00:09:22.105 Test: blockdev write read invalid size ...passed 00:09:22.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:22.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:22.105 Test: blockdev write read max offset ...passed 00:09:22.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:22.105 Test: blockdev writev readv 8 blocks ...passed 00:09:22.105 Test: blockdev writev readv 30 x 1block ...passed 00:09:22.105 Test: blockdev writev readv block ...passed 00:09:22.364 Test: blockdev writev readv size > 128k ...passed 00:09:22.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:22.364 Test: blockdev comparev and writev ...[2024-11-20 10:26:22.844919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.844951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.844966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 
10:26:22.844974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.845760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:22.364 [2024-11-20 10:26:22.845767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:22.364 passed 00:09:22.364 Test: blockdev nvme passthru rw ...passed 00:09:22.364 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:26:22.928289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:22.364 [2024-11-20 10:26:22.928310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.928418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:22.364 [2024-11-20 10:26:22.928429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.928527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:22.364 [2024-11-20 10:26:22.928537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:22.364 [2024-11-20 10:26:22.928636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:22.364 [2024-11-20 10:26:22.928647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:22.364 passed 00:09:22.364 Test: blockdev nvme admin passthru ...passed 00:09:22.364 Test: blockdev copy ...passed 00:09:22.364 00:09:22.364 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.364 suites 1 1 n/a 0 0 00:09:22.364 tests 23 23 23 0 0 00:09:22.364 asserts 152 152 152 0 n/a 00:09:22.364 00:09:22.364 Elapsed time = 1.061 seconds 
00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.623 rmmod nvme_tcp 00:09:22.623 rmmod nvme_fabrics 00:09:22.623 rmmod nvme_keyring 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3379222 ']' 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3379222 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3379222 ']' 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3379222 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3379222 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3379222' 00:09:22.623 killing process with pid 3379222 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3379222 00:09:22.623 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3379222 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.882 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.787 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.787 00:09:24.787 real 0m10.623s 00:09:24.787 user 0m12.592s 00:09:24.787 sys 0m4.994s 00:09:24.787 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.787 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.787 ************************************ 00:09:24.787 END TEST nvmf_bdevio 00:09:24.787 ************************************ 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:25.047 00:09:25.047 real 4m37.489s 00:09:25.047 user 10m25.693s 00:09:25.047 sys 1m38.139s 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.047 ************************************ 00:09:25.047 END TEST nvmf_target_core 00:09:25.047 ************************************ 00:09:25.047 10:26:25 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:25.047 10:26:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.047 10:26:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.047 10:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:25.047 ************************************ 00:09:25.047 START TEST nvmf_target_extra 00:09:25.047 ************************************ 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:25.047 * Looking for test storage... 00:09:25.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.047 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.307 --rc genhtml_branch_coverage=1 00:09:25.307 --rc genhtml_function_coverage=1 00:09:25.307 --rc genhtml_legend=1 00:09:25.307 --rc geninfo_all_blocks=1 
00:09:25.307 --rc geninfo_unexecuted_blocks=1 00:09:25.307 00:09:25.307 ' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.307 --rc genhtml_branch_coverage=1 00:09:25.307 --rc genhtml_function_coverage=1 00:09:25.307 --rc genhtml_legend=1 00:09:25.307 --rc geninfo_all_blocks=1 00:09:25.307 --rc geninfo_unexecuted_blocks=1 00:09:25.307 00:09:25.307 ' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.307 --rc genhtml_branch_coverage=1 00:09:25.307 --rc genhtml_function_coverage=1 00:09:25.307 --rc genhtml_legend=1 00:09:25.307 --rc geninfo_all_blocks=1 00:09:25.307 --rc geninfo_unexecuted_blocks=1 00:09:25.307 00:09:25.307 ' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.307 --rc genhtml_branch_coverage=1 00:09:25.307 --rc genhtml_function_coverage=1 00:09:25.307 --rc genhtml_legend=1 00:09:25.307 --rc geninfo_all_blocks=1 00:09:25.307 --rc geninfo_unexecuted_blocks=1 00:09:25.307 00:09:25.307 ' 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.307 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:25.308 ************************************ 00:09:25.308 START TEST nvmf_example 00:09:25.308 ************************************ 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:25.308 * Looking for test storage... 00:09:25.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.308 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.308 
10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.308 --rc genhtml_branch_coverage=1 00:09:25.308 --rc genhtml_function_coverage=1 00:09:25.308 --rc genhtml_legend=1 00:09:25.308 --rc geninfo_all_blocks=1 00:09:25.308 --rc geninfo_unexecuted_blocks=1 00:09:25.308 00:09:25.308 ' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.308 --rc genhtml_branch_coverage=1 00:09:25.308 --rc genhtml_function_coverage=1 00:09:25.308 --rc genhtml_legend=1 00:09:25.308 --rc geninfo_all_blocks=1 00:09:25.308 --rc geninfo_unexecuted_blocks=1 00:09:25.308 00:09:25.308 ' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.308 --rc genhtml_branch_coverage=1 00:09:25.308 --rc genhtml_function_coverage=1 00:09:25.308 --rc genhtml_legend=1 00:09:25.308 --rc geninfo_all_blocks=1 00:09:25.308 --rc geninfo_unexecuted_blocks=1 00:09:25.308 00:09:25.308 ' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.308 --rc 
genhtml_branch_coverage=1 00:09:25.308 --rc genhtml_function_coverage=1 00:09:25.308 --rc genhtml_legend=1 00:09:25.308 --rc geninfo_all_blocks=1 00:09:25.308 --rc geninfo_unexecuted_blocks=1 00:09:25.308 00:09:25.308 ' 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.308 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:25.569 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.569 
10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.569 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.173 10:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.173 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.174 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.174 10:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.174 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.174 
10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.174 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:09:32.174 00:09:32.174 --- 10.0.0.2 ping statistics --- 00:09:32.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.174 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:09:32.174 00:09:32.174 --- 10.0.0.1 ping statistics --- 00:09:32.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.174 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.174 10:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3383284 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:32.174 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3383284 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3383284 ']' 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.175 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.433 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:32.433 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:32.433 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.433 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:32.433 
10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.433 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:32.434 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:44.644 Initializing NVMe Controllers 00:09:44.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:44.644 Initialization complete. Launching workers. 00:09:44.644 ======================================================== 00:09:44.644 Latency(us) 00:09:44.644 Device Information : IOPS MiB/s Average min max 00:09:44.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18114.53 70.76 3532.66 522.63 15434.91 00:09:44.644 ======================================================== 00:09:44.644 Total : 18114.53 70.76 3532.66 522.63 15434.91 00:09:44.644 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.644 rmmod nvme_tcp 00:09:44.644 rmmod nvme_fabrics 00:09:44.644 rmmod nvme_keyring 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3383284 ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3383284 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3383284 ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3383284 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383284 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383284' 00:09:44.644 killing process with pid 3383284 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3383284 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3383284 00:09:44.644 nvmf threads initialize successfully 00:09:44.644 bdev subsystem init successfully 00:09:44.644 created a nvmf target service 00:09:44.644 create targets's poll groups done 00:09:44.644 all subsystems of target started 00:09:44.644 nvmf target is running 00:09:44.644 all subsystems of target stopped 00:09:44.644 destroy targets's poll groups done 00:09:44.644 destroyed the nvmf target service 00:09:44.644 bdev subsystem 
finish successfully 00:09:44.644 nvmf threads destroy successfully 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.644 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.903 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.903 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:44.903 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.903 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 00:09:45.162 real 0m19.791s 00:09:45.162 user 0m45.818s 00:09:45.162 sys 0m6.110s 00:09:45.162 
10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.162 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 ************************************ 00:09:45.162 END TEST nvmf_example 00:09:45.162 ************************************ 00:09:45.162 10:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.163 ************************************ 00:09:45.163 START TEST nvmf_filesystem 00:09:45.163 ************************************ 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:45.163 * Looking for test storage... 
00:09:45.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:45.163 
10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.163 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.426 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.426 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:45.426 --rc genhtml_branch_coverage=1 00:09:45.426 --rc genhtml_function_coverage=1 00:09:45.426 --rc genhtml_legend=1 00:09:45.426 --rc geninfo_all_blocks=1 00:09:45.426 --rc geninfo_unexecuted_blocks=1 00:09:45.426 00:09:45.426 ' 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.427 --rc genhtml_branch_coverage=1 00:09:45.427 --rc genhtml_function_coverage=1 00:09:45.427 --rc genhtml_legend=1 00:09:45.427 --rc geninfo_all_blocks=1 00:09:45.427 --rc geninfo_unexecuted_blocks=1 00:09:45.427 00:09:45.427 ' 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.427 --rc genhtml_branch_coverage=1 00:09:45.427 --rc genhtml_function_coverage=1 00:09:45.427 --rc genhtml_legend=1 00:09:45.427 --rc geninfo_all_blocks=1 00:09:45.427 --rc geninfo_unexecuted_blocks=1 00:09:45.427 00:09:45.427 ' 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.427 --rc genhtml_branch_coverage=1 00:09:45.427 --rc genhtml_function_coverage=1 00:09:45.427 --rc genhtml_legend=1 00:09:45.427 --rc geninfo_all_blocks=1 00:09:45.427 --rc geninfo_unexecuted_blocks=1 00:09:45.427 00:09:45.427 ' 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:45.427 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:45.427 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:45.427 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:45.427 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:45.427 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.428 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:45.428 
10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:45.428 #define SPDK_CONFIG_H 00:09:45.428 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:45.428 #define SPDK_CONFIG_APPS 1 00:09:45.428 #define SPDK_CONFIG_ARCH native 00:09:45.428 #undef SPDK_CONFIG_ASAN 00:09:45.428 #undef SPDK_CONFIG_AVAHI 00:09:45.428 #undef SPDK_CONFIG_CET 00:09:45.428 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:45.428 #define SPDK_CONFIG_COVERAGE 1 00:09:45.428 #define SPDK_CONFIG_CROSS_PREFIX 00:09:45.428 #undef SPDK_CONFIG_CRYPTO 00:09:45.428 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:45.428 #undef SPDK_CONFIG_CUSTOMOCF 00:09:45.428 #undef SPDK_CONFIG_DAOS 00:09:45.428 #define SPDK_CONFIG_DAOS_DIR 00:09:45.428 #define SPDK_CONFIG_DEBUG 1 00:09:45.428 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:45.428 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.428 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:45.428 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:45.428 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:45.428 #undef SPDK_CONFIG_DPDK_UADK 00:09:45.428 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.428 #define SPDK_CONFIG_EXAMPLES 1 00:09:45.428 #undef SPDK_CONFIG_FC 00:09:45.428 #define SPDK_CONFIG_FC_PATH 00:09:45.428 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:45.428 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:45.428 #define SPDK_CONFIG_FSDEV 1 00:09:45.428 #undef SPDK_CONFIG_FUSE 00:09:45.428 #undef SPDK_CONFIG_FUZZER 00:09:45.428 #define SPDK_CONFIG_FUZZER_LIB 00:09:45.428 #undef SPDK_CONFIG_GOLANG 00:09:45.428 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:45.428 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:45.428 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:45.428 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:45.428 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:45.428 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:45.428 #undef SPDK_CONFIG_HAVE_LZ4 00:09:45.428 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:45.428 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:45.428 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:45.428 #define SPDK_CONFIG_IDXD 1 00:09:45.428 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:45.428 #undef SPDK_CONFIG_IPSEC_MB 00:09:45.428 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:45.428 #define SPDK_CONFIG_ISAL 1 00:09:45.428 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:45.428 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:45.428 #define SPDK_CONFIG_LIBDIR 00:09:45.428 #undef SPDK_CONFIG_LTO 00:09:45.428 #define SPDK_CONFIG_MAX_LCORES 128 00:09:45.428 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:45.428 #define SPDK_CONFIG_NVME_CUSE 1 00:09:45.428 #undef SPDK_CONFIG_OCF 00:09:45.428 #define SPDK_CONFIG_OCF_PATH 00:09:45.428 #define SPDK_CONFIG_OPENSSL_PATH 00:09:45.428 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:45.428 #define SPDK_CONFIG_PGO_DIR 00:09:45.428 #undef SPDK_CONFIG_PGO_USE 00:09:45.428 #define SPDK_CONFIG_PREFIX /usr/local 00:09:45.428 #undef SPDK_CONFIG_RAID5F 00:09:45.428 #undef SPDK_CONFIG_RBD 00:09:45.428 #define SPDK_CONFIG_RDMA 1 00:09:45.428 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:45.428 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:45.428 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:45.428 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:45.428 #define SPDK_CONFIG_SHARED 1 00:09:45.428 #undef SPDK_CONFIG_SMA 00:09:45.428 #define SPDK_CONFIG_TESTS 1 00:09:45.428 #undef SPDK_CONFIG_TSAN 00:09:45.428 #define SPDK_CONFIG_UBLK 1 00:09:45.428 #define SPDK_CONFIG_UBSAN 1 00:09:45.428 #undef SPDK_CONFIG_UNIT_TESTS 00:09:45.428 #undef SPDK_CONFIG_URING 00:09:45.428 #define SPDK_CONFIG_URING_PATH 00:09:45.428 #undef SPDK_CONFIG_URING_ZNS 00:09:45.428 #undef SPDK_CONFIG_USDT 00:09:45.428 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:45.428 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:45.428 #define SPDK_CONFIG_VFIO_USER 1 00:09:45.428 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:45.428 #define SPDK_CONFIG_VHOST 1 00:09:45.428 #define SPDK_CONFIG_VIRTIO 1 00:09:45.428 #undef SPDK_CONFIG_VTUNE 00:09:45.428 #define SPDK_CONFIG_VTUNE_DIR 00:09:45.428 #define SPDK_CONFIG_WERROR 1 00:09:45.428 #define SPDK_CONFIG_WPDK_DIR 00:09:45.428 #undef SPDK_CONFIG_XNVME 00:09:45.428 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.428 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:45.429 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:45.429 
10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:45.429 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:45.429 
10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:45.429 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:45.429 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:45.430 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:45.431 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3385552 ]] 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3385552 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.twFu1J 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.twFu1J/tests/target /tmp/spdk.twFu1J 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189236944896 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6727016448 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981427712 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:45.431 10:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=552960 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:45.431 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:45.432 * Looking for test storage... 
00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189236944896 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8941608960 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.432 10:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:45.432 10:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.432 --rc genhtml_branch_coverage=1 00:09:45.432 --rc genhtml_function_coverage=1 00:09:45.432 --rc genhtml_legend=1 00:09:45.432 --rc geninfo_all_blocks=1 00:09:45.432 --rc geninfo_unexecuted_blocks=1 00:09:45.432 00:09:45.432 ' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.432 --rc genhtml_branch_coverage=1 00:09:45.432 --rc genhtml_function_coverage=1 00:09:45.432 --rc genhtml_legend=1 00:09:45.432 --rc geninfo_all_blocks=1 00:09:45.432 --rc geninfo_unexecuted_blocks=1 00:09:45.432 00:09:45.432 ' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.432 --rc genhtml_branch_coverage=1 00:09:45.432 --rc genhtml_function_coverage=1 00:09:45.432 --rc genhtml_legend=1 00:09:45.432 --rc geninfo_all_blocks=1 00:09:45.432 --rc geninfo_unexecuted_blocks=1 00:09:45.432 00:09:45.432 ' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.432 --rc genhtml_branch_coverage=1 00:09:45.432 --rc genhtml_function_coverage=1 00:09:45.432 --rc genhtml_legend=1 00:09:45.432 --rc geninfo_all_blocks=1 00:09:45.432 --rc geninfo_unexecuted_blocks=1 00:09:45.432 00:09:45.432 ' 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.432 10:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.432 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.693 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.694 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.319 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.319 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.319 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:52.319 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.320 10:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:52.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:52.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.320 10:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:52.320 Found net devices under 0000:86:00.0: cvl_0_0 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:52.320 Found net devices under 0000:86:00.1: cvl_0_1 00:09:52.320 10:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.320 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:09:52.320 00:09:52.320 --- 10.0.0.2 ping statistics --- 00:09:52.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.320 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:09:52.320 00:09:52.320 --- 10.0.0.1 ping statistics --- 00:09:52.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.320 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:52.320 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:52.321 10:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 ************************************ 00:09:52.321 START TEST nvmf_filesystem_no_in_capsule 00:09:52.321 ************************************ 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3388732 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3388732 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3388732 ']' 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.321 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 [2024-11-20 10:26:52.277991] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:09:52.321 [2024-11-20 10:26:52.278034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.321 [2024-11-20 10:26:52.357969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.321 [2024-11-20 10:26:52.400748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.321 [2024-11-20 10:26:52.400787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:52.321 [2024-11-20 10:26:52.400795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.321 [2024-11-20 10:26:52.400800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.321 [2024-11-20 10:26:52.400805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.321 [2024-11-20 10:26:52.402288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.321 [2024-11-20 10:26:52.402396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.321 [2024-11-20 10:26:52.402508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.321 [2024-11-20 10:26:52.402508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.581 [2024-11-20 10:26:53.170320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.581 Malloc1 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.581 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.841 [2024-11-20 10:26:53.314811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:52.841 10:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:52.841 { 00:09:52.841 "name": "Malloc1", 00:09:52.841 "aliases": [ 00:09:52.841 "ef51599d-690d-4737-ba5f-59aa1fdf7ab5" 00:09:52.841 ], 00:09:52.841 "product_name": "Malloc disk", 00:09:52.841 "block_size": 512, 00:09:52.841 "num_blocks": 1048576, 00:09:52.841 "uuid": "ef51599d-690d-4737-ba5f-59aa1fdf7ab5", 00:09:52.841 "assigned_rate_limits": { 00:09:52.841 "rw_ios_per_sec": 0, 00:09:52.841 "rw_mbytes_per_sec": 0, 00:09:52.841 "r_mbytes_per_sec": 0, 00:09:52.841 "w_mbytes_per_sec": 0 00:09:52.841 }, 00:09:52.841 "claimed": true, 00:09:52.841 "claim_type": "exclusive_write", 00:09:52.841 "zoned": false, 00:09:52.841 "supported_io_types": { 00:09:52.841 "read": true, 00:09:52.841 "write": true, 00:09:52.841 "unmap": true, 00:09:52.841 "flush": true, 00:09:52.841 "reset": true, 00:09:52.841 "nvme_admin": false, 00:09:52.841 "nvme_io": false, 00:09:52.841 "nvme_io_md": false, 00:09:52.841 "write_zeroes": true, 00:09:52.841 "zcopy": true, 00:09:52.841 "get_zone_info": false, 00:09:52.841 "zone_management": false, 00:09:52.841 "zone_append": false, 00:09:52.841 "compare": false, 00:09:52.841 "compare_and_write": 
false, 00:09:52.841 "abort": true, 00:09:52.841 "seek_hole": false, 00:09:52.841 "seek_data": false, 00:09:52.841 "copy": true, 00:09:52.841 "nvme_iov_md": false 00:09:52.841 }, 00:09:52.841 "memory_domains": [ 00:09:52.841 { 00:09:52.841 "dma_device_id": "system", 00:09:52.841 "dma_device_type": 1 00:09:52.841 }, 00:09:52.841 { 00:09:52.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.841 "dma_device_type": 2 00:09:52.841 } 00:09:52.841 ], 00:09:52.841 "driver_specific": {} 00:09:52.841 } 00:09:52.841 ]' 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:52.841 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.219 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:54.220 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.220 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.220 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:54.220 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:56.120 10:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:56.120 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:56.378 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:56.636 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:58.018 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.018 ************************************ 00:09:58.018 START TEST filesystem_ext4 00:09:58.018 ************************************ 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:58.018 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:58.018 mke2fs 1.47.0 (5-Feb-2023) 00:09:58.018 Discarding device blocks: 0/522240 done 00:09:58.018 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:58.018 Filesystem UUID: 1a80e2de-64d2-4b63-93cb-f14346fe1c3b 00:09:58.018 Superblock backups stored on blocks: 00:09:58.018 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:58.018 00:09:58.018 Allocating group tables: 0/64 done 00:09:58.018 Writing inode tables: 0/64 done 00:09:58.018 Creating journal (8192 blocks): done 00:09:58.018 Writing superblocks and filesystem accounting information: 0/64 done 00:09:58.018 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:58.018 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.580 10:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3388732 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.580 00:10:04.580 real 0m6.242s 00:10:04.580 user 0m0.029s 00:10:04.580 sys 0m0.068s 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:04.580 ************************************ 00:10:04.580 END TEST filesystem_ext4 00:10:04.580 ************************************ 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:04.580 
10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.580 ************************************ 00:10:04.580 START TEST filesystem_btrfs 00:10:04.580 ************************************ 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:04.580 10:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:04.580 btrfs-progs v6.8.1 00:10:04.580 See https://btrfs.readthedocs.io for more information. 00:10:04.580 00:10:04.580 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:04.580 NOTE: several default settings have changed in version 5.15, please make sure 00:10:04.580 this does not affect your deployments: 00:10:04.580 - DUP for metadata (-m dup) 00:10:04.580 - enabled no-holes (-O no-holes) 00:10:04.580 - enabled free-space-tree (-R free-space-tree) 00:10:04.580 00:10:04.580 Label: (null) 00:10:04.580 UUID: df4fef79-0e78-42f7-b842-712798796075 00:10:04.580 Node size: 16384 00:10:04.580 Sector size: 4096 (CPU page size: 4096) 00:10:04.580 Filesystem size: 510.00MiB 00:10:04.580 Block group profiles: 00:10:04.580 Data: single 8.00MiB 00:10:04.580 Metadata: DUP 32.00MiB 00:10:04.580 System: DUP 8.00MiB 00:10:04.580 SSD detected: yes 00:10:04.580 Zoned device: no 00:10:04.580 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:04.580 Checksum: crc32c 00:10:04.580 Number of devices: 1 00:10:04.580 Devices: 00:10:04.580 ID SIZE PATH 00:10:04.580 1 510.00MiB /dev/nvme0n1p1 00:10:04.580 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:04.580 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:05.516 10:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3388732 00:10:05.516 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:05.517 00:10:05.517 real 0m1.249s 00:10:05.517 user 0m0.019s 00:10:05.517 sys 0m0.122s 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.517 
10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:05.517 ************************************ 00:10:05.517 END TEST filesystem_btrfs 00:10:05.517 ************************************ 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.517 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.517 ************************************ 00:10:05.517 START TEST filesystem_xfs 00:10:05.517 ************************************ 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:05.517 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:05.517 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:05.517 = sectsz=512 attr=2, projid32bit=1 00:10:05.517 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:05.517 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:05.517 data = bsize=4096 blocks=130560, imaxpct=25 00:10:05.517 = sunit=0 swidth=0 blks 00:10:05.517 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:05.517 log =internal log bsize=4096 blocks=16384, version=2 00:10:05.517 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:05.517 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:06.451 Discarding blocks...Done. 
00:10:06.451 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:06.451 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3388732 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:08.980 10:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.980 00:10:08.980 real 0m3.516s 00:10:08.980 user 0m0.025s 00:10:08.980 sys 0m0.075s 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:08.980 ************************************ 00:10:08.980 END TEST filesystem_xfs 00:10:08.980 ************************************ 00:10:08.980 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3388732 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3388732 ']' 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3388732 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3388732 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3388732' 00:10:09.239 killing process with pid 3388732 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3388732 00:10:09.239 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3388732 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:09.807 00:10:09.807 real 0m18.017s 00:10:09.807 user 1m11.058s 00:10:09.807 sys 0m1.444s 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.807 ************************************ 00:10:09.807 END TEST nvmf_filesystem_no_in_capsule 00:10:09.807 ************************************ 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.807 10:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.807 ************************************ 00:10:09.807 START TEST nvmf_filesystem_in_capsule 00:10:09.807 ************************************ 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3392483 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3392483 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3392483 ']' 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.807 10:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.807 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.807 [2024-11-20 10:27:10.366905] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:10:09.807 [2024-11-20 10:27:10.367036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.807 [2024-11-20 10:27:10.449746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.807 [2024-11-20 10:27:10.490045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.807 [2024-11-20 10:27:10.490085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.807 [2024-11-20 10:27:10.490092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.807 [2024-11-20 10:27:10.490098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.807 [2024-11-20 10:27:10.490104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
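[editorial sketch] The lines above show the harness launching nvmf_tgt (via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF`) and then `waitforlisten` polling for the RPC socket at /var/tmp/spdk.sock with max_retries=100. The helper below is a hedged, illustrative reconstruction of that polling step, not the exact code from common/autotest_common.sh; the socket path, netns name, and retry count are taken from the log, the loop body is an assumption.

```shell
# Illustrative reconstruction of the "waitforlisten" step seen in the log:
# after launching nvmf_tgt, poll until its UNIX-domain RPC socket appears.
# NOT the exact autotest helper -- a sketch under the assumptions above.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock}   # rpc_addr from the log
    local max_retries=${2:-100}           # max_retries from the log
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        # nvmf_tgt creates this UNIX domain socket once it accepts RPCs
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

On timeout the function returns nonzero, which is what lets the harness abort the test early instead of hanging on a target that never came up.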
00:10:09.807 [2024-11-20 10:27:10.491722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.807 [2024-11-20 10:27:10.491826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.807 [2024-11-20 10:27:10.491932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.807 [2024-11-20 10:27:10.491933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 [2024-11-20 10:27:10.637602] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 10:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.067 [2024-11-20 10:27:10.785472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:10.067 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.067 10:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.369 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.369 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:10.369 { 00:10:10.369 "name": "Malloc1", 00:10:10.369 "aliases": [ 00:10:10.369 "57206993-13d8-4164-ae7f-51a6a118fefa" 00:10:10.369 ], 00:10:10.369 "product_name": "Malloc disk", 00:10:10.369 "block_size": 512, 00:10:10.369 "num_blocks": 1048576, 00:10:10.369 "uuid": "57206993-13d8-4164-ae7f-51a6a118fefa", 00:10:10.369 "assigned_rate_limits": { 00:10:10.369 "rw_ios_per_sec": 0, 00:10:10.369 "rw_mbytes_per_sec": 0, 00:10:10.369 "r_mbytes_per_sec": 0, 00:10:10.369 "w_mbytes_per_sec": 0 00:10:10.369 }, 00:10:10.369 "claimed": true, 00:10:10.369 "claim_type": "exclusive_write", 00:10:10.369 "zoned": false, 00:10:10.369 "supported_io_types": { 00:10:10.369 "read": true, 00:10:10.369 "write": true, 00:10:10.369 "unmap": true, 00:10:10.369 "flush": true, 00:10:10.369 "reset": true, 00:10:10.369 "nvme_admin": false, 00:10:10.369 "nvme_io": false, 00:10:10.369 "nvme_io_md": false, 00:10:10.369 "write_zeroes": true, 00:10:10.369 "zcopy": true, 00:10:10.369 "get_zone_info": false, 00:10:10.369 "zone_management": false, 00:10:10.369 "zone_append": false, 00:10:10.369 "compare": false, 00:10:10.369 "compare_and_write": false, 00:10:10.369 "abort": true, 00:10:10.369 "seek_hole": false, 00:10:10.369 "seek_data": false, 00:10:10.370 "copy": true, 00:10:10.370 "nvme_iov_md": false 00:10:10.370 }, 00:10:10.370 "memory_domains": [ 00:10:10.370 { 00:10:10.370 "dma_device_id": "system", 00:10:10.370 "dma_device_type": 1 00:10:10.370 }, 00:10:10.370 { 00:10:10.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.370 "dma_device_type": 2 00:10:10.370 } 00:10:10.370 ], 00:10:10.370 
"driver_specific": {} 00:10:10.370 } 00:10:10.370 ]' 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:10.370 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.805 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.805 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.805 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.805 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:11.805 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:13.708 10:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:13.708 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:13.966 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:14.901 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.837 ************************************ 00:10:15.837 START TEST filesystem_in_capsule_ext4 00:10:15.837 ************************************ 00:10:15.837 10:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:15.837 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:15.837 mke2fs 1.47.0 (5-Feb-2023) 00:10:15.837 Discarding device blocks: 
0/522240 done 00:10:15.837 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:15.837 Filesystem UUID: 4597a84a-7b7a-482d-be8b-1cd5a1d02520 00:10:15.837 Superblock backups stored on blocks: 00:10:15.837 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:15.837 00:10:15.837 Allocating group tables: 0/64 done 00:10:15.837 Writing inode tables: 0/64 done 00:10:17.740 Creating journal (8192 blocks): done 00:10:19.947 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:19.947 00:10:19.947 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:19.947 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3392483 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.512 00:10:26.512 real 0m10.372s 00:10:26.512 user 0m0.030s 00:10:26.512 sys 0m0.072s 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:26.512 ************************************ 00:10:26.512 END TEST filesystem_in_capsule_ext4 00:10:26.512 ************************************ 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.512 ************************************ 00:10:26.512 START 
TEST filesystem_in_capsule_btrfs 00:10:26.512 ************************************ 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:26.512 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:26.512 btrfs-progs v6.8.1 00:10:26.512 See https://btrfs.readthedocs.io for more information. 00:10:26.512 00:10:26.512 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:26.512 NOTE: several default settings have changed in version 5.15, please make sure 00:10:26.512 this does not affect your deployments: 00:10:26.512 - DUP for metadata (-m dup) 00:10:26.512 - enabled no-holes (-O no-holes) 00:10:26.512 - enabled free-space-tree (-R free-space-tree) 00:10:26.512 00:10:26.512 Label: (null) 00:10:26.512 UUID: 85d58721-3bc1-4d85-90cb-0d1e55d1e786 00:10:26.512 Node size: 16384 00:10:26.512 Sector size: 4096 (CPU page size: 4096) 00:10:26.512 Filesystem size: 510.00MiB 00:10:26.512 Block group profiles: 00:10:26.512 Data: single 8.00MiB 00:10:26.512 Metadata: DUP 32.00MiB 00:10:26.512 System: DUP 8.00MiB 00:10:26.512 SSD detected: yes 00:10:26.512 Zoned device: no 00:10:26.512 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:26.512 Checksum: crc32c 00:10:26.512 Number of devices: 1 00:10:26.512 Devices: 00:10:26.512 ID SIZE PATH 00:10:26.512 1 510.00MiB /dev/nvme0n1p1 00:10:26.512 00:10:26.512 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:26.512 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.447 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.447 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:27.447 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.448 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:27.448 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:27.448 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3392483 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.706 00:10:27.706 real 0m1.400s 00:10:27.706 user 0m0.024s 00:10:27.706 sys 0m0.121s 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:27.706 ************************************ 00:10:27.706 END TEST filesystem_in_capsule_btrfs 00:10:27.706 ************************************ 00:10:27.706 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.706 ************************************ 00:10:27.706 START TEST filesystem_in_capsule_xfs 00:10:27.706 ************************************ 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.706 
10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.706 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:27.706 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:27.706 = sectsz=512 attr=2, projid32bit=1 00:10:27.706 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:27.706 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:27.706 data = bsize=4096 blocks=130560, imaxpct=25 00:10:27.706 = sunit=0 swidth=0 blks 00:10:27.706 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:27.706 log =internal log bsize=4096 blocks=16384, version=2 00:10:27.706 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:27.706 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:28.648 Discarding blocks...Done. 
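[editorial sketch] The three filesystem subtests above all route through the same `make_filesystem` helper, and its xtrace (common/autotest_common.sh@930-941) shows the one branch that differs per filesystem: ext4 is forced with uppercase `-F`, while btrfs and xfs take lowercase `-f`. The function below is a hedged reconstruction of just that flag selection; it only builds the command string so it can be shown without a real block device, whereas the actual helper executes mkfs and retries on failure.

```shell
# Illustrative reconstruction of make_filesystem's force-flag selection,
# as traced in the log: '[' ext4 = ext4 ']' -> force=-F, else force=-f.
# Builds (does not run) the mkfs command line.
build_mkfs_cmd() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mkfs.ext4 forces with uppercase -F
    else
        force=-f    # mkfs.btrfs and mkfs.xfs force with lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}
```

For example, `build_mkfs_cmd xfs /dev/nvme0n1p1` prints `mkfs.xfs -f /dev/nvme0n1p1`, matching the command whose output appears in the xfs subtest above.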
00:10:28.649 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.649 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3392483 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.566 00:10:30.566 real 0m2.882s 00:10:30.566 user 0m0.014s 00:10:30.566 sys 0m0.084s 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.566 ************************************ 00:10:30.566 END TEST filesystem_in_capsule_xfs 00:10:30.566 ************************************ 00:10:30.566 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:30.824 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:30.824 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.083 10:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3392483 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3392483 ']' 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3392483 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.084 10:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392483 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392483' 00:10:31.084 killing process with pid 3392483 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3392483 00:10:31.084 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3392483 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.343 00:10:31.343 real 0m21.697s 00:10:31.343 user 1m25.470s 00:10:31.343 sys 0m1.535s 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.343 ************************************ 00:10:31.343 END TEST nvmf_filesystem_in_capsule 00:10:31.343 ************************************ 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.343 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.343 rmmod nvme_tcp 00:10:31.343 rmmod nvme_fabrics 00:10:31.602 rmmod nvme_keyring 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.602 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.505 00:10:33.505 real 0m48.469s 00:10:33.505 user 2m38.559s 00:10:33.505 sys 0m7.743s 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.505 ************************************ 00:10:33.505 END TEST nvmf_filesystem 00:10:33.505 ************************************ 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.505 10:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.763 ************************************ 00:10:33.763 START TEST nvmf_target_discovery 00:10:33.763 ************************************ 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.763 * Looking for test storage... 
00:10:33.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:33.763 
10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.763 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.764 --rc genhtml_branch_coverage=1 00:10:33.764 --rc genhtml_function_coverage=1 00:10:33.764 --rc genhtml_legend=1 00:10:33.764 --rc geninfo_all_blocks=1 00:10:33.764 --rc geninfo_unexecuted_blocks=1 00:10:33.764 00:10:33.764 ' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.764 --rc genhtml_branch_coverage=1 00:10:33.764 --rc genhtml_function_coverage=1 00:10:33.764 --rc genhtml_legend=1 00:10:33.764 --rc geninfo_all_blocks=1 00:10:33.764 --rc geninfo_unexecuted_blocks=1 00:10:33.764 00:10:33.764 ' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.764 --rc genhtml_branch_coverage=1 00:10:33.764 --rc genhtml_function_coverage=1 00:10:33.764 --rc genhtml_legend=1 00:10:33.764 --rc geninfo_all_blocks=1 00:10:33.764 --rc geninfo_unexecuted_blocks=1 00:10:33.764 00:10:33.764 ' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.764 --rc genhtml_branch_coverage=1 00:10:33.764 --rc genhtml_function_coverage=1 00:10:33.764 --rc genhtml_legend=1 00:10:33.764 --rc geninfo_all_blocks=1 00:10:33.764 --rc geninfo_unexecuted_blocks=1 00:10:33.764 00:10:33.764 ' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.764 10:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.764 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.335 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.335 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.335 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.336 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.336 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.336 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.336 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:10:40.336 00:10:40.336 --- 10.0.0.2 ping statistics --- 00:10:40.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.336 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:10:40.336 00:10:40.336 --- 10.0.0.1 ping statistics --- 00:10:40.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.336 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3399698 00:10:40.336 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.336 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3399698 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3399698 ']' 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 [2024-11-20 10:27:40.522008] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:10:40.337 [2024-11-20 10:27:40.522063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.337 [2024-11-20 10:27:40.600394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.337 [2024-11-20 10:27:40.643047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:40.337 [2024-11-20 10:27:40.643084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.337 [2024-11-20 10:27:40.643092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.337 [2024-11-20 10:27:40.643098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.337 [2024-11-20 10:27:40.643103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.337 [2024-11-20 10:27:40.644711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.337 [2024-11-20 10:27:40.644819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.337 [2024-11-20 10:27:40.644927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.337 [2024-11-20 10:27:40.644928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 [2024-11-20 10:27:40.790562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 Null1 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 
10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 [2024-11-20 10:27:40.836091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 Null2 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 
10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 Null3 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 Null4 00:10:40.337 
10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:40.337 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.338 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:40.598 00:10:40.598 Discovery Log Number of Records 6, Generation counter 6 00:10:40.598 =====Discovery Log Entry 0====== 00:10:40.598 trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: current discovery subsystem 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4420 00:10:40.598 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: explicit discovery connections, duplicate discovery information 00:10:40.598 sectype: none 00:10:40.598 =====Discovery Log Entry 1====== 00:10:40.598 trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: nvme subsystem 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4420 00:10:40.598 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: none 00:10:40.598 sectype: none 00:10:40.598 =====Discovery Log Entry 2====== 00:10:40.598 
trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: nvme subsystem 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4420 00:10:40.598 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: none 00:10:40.598 sectype: none 00:10:40.598 =====Discovery Log Entry 3====== 00:10:40.598 trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: nvme subsystem 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4420 00:10:40.598 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: none 00:10:40.598 sectype: none 00:10:40.598 =====Discovery Log Entry 4====== 00:10:40.598 trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: nvme subsystem 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4420 00:10:40.598 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: none 00:10:40.598 sectype: none 00:10:40.598 =====Discovery Log Entry 5====== 00:10:40.598 trtype: tcp 00:10:40.598 adrfam: ipv4 00:10:40.598 subtype: discovery subsystem referral 00:10:40.598 treq: not required 00:10:40.598 portid: 0 00:10:40.598 trsvcid: 4430 00:10:40.598 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.598 traddr: 10.0.0.2 00:10:40.598 eflags: none 00:10:40.598 sectype: none 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:40.598 Perform nvmf subsystem discovery via RPC 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.598 [ 00:10:40.598 { 00:10:40.598 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:40.598 "subtype": "Discovery", 00:10:40.598 "listen_addresses": [ 00:10:40.598 { 00:10:40.598 "trtype": "TCP", 00:10:40.598 "adrfam": "IPv4", 00:10:40.598 "traddr": "10.0.0.2", 00:10:40.598 "trsvcid": "4420" 00:10:40.598 } 00:10:40.598 ], 00:10:40.598 "allow_any_host": true, 00:10:40.598 "hosts": [] 00:10:40.598 }, 00:10:40.598 { 00:10:40.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.598 "subtype": "NVMe", 00:10:40.598 "listen_addresses": [ 00:10:40.598 { 00:10:40.598 "trtype": "TCP", 00:10:40.598 "adrfam": "IPv4", 00:10:40.598 "traddr": "10.0.0.2", 00:10:40.598 "trsvcid": "4420" 00:10:40.598 } 00:10:40.598 ], 00:10:40.598 "allow_any_host": true, 00:10:40.598 "hosts": [], 00:10:40.598 "serial_number": "SPDK00000000000001", 00:10:40.598 "model_number": "SPDK bdev Controller", 00:10:40.598 "max_namespaces": 32, 00:10:40.598 "min_cntlid": 1, 00:10:40.598 "max_cntlid": 65519, 00:10:40.598 "namespaces": [ 00:10:40.598 { 00:10:40.598 "nsid": 1, 00:10:40.598 "bdev_name": "Null1", 00:10:40.598 "name": "Null1", 00:10:40.598 "nguid": "9FAD24BBA802478EABAB94A76AEC62B7", 00:10:40.598 "uuid": "9fad24bb-a802-478e-abab-94a76aec62b7" 00:10:40.598 } 00:10:40.598 ] 00:10:40.598 }, 00:10:40.598 { 00:10:40.598 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.598 "subtype": "NVMe", 00:10:40.598 "listen_addresses": [ 00:10:40.598 { 00:10:40.598 "trtype": "TCP", 00:10:40.598 "adrfam": "IPv4", 00:10:40.598 "traddr": "10.0.0.2", 00:10:40.598 "trsvcid": "4420" 00:10:40.598 } 00:10:40.598 ], 00:10:40.598 "allow_any_host": true, 00:10:40.598 "hosts": [], 00:10:40.598 "serial_number": "SPDK00000000000002", 00:10:40.598 "model_number": "SPDK bdev Controller", 00:10:40.598 "max_namespaces": 32, 00:10:40.598 "min_cntlid": 1, 00:10:40.598 "max_cntlid": 65519, 00:10:40.598 "namespaces": [ 00:10:40.598 { 00:10:40.598 "nsid": 1, 00:10:40.598 "bdev_name": "Null2", 00:10:40.598 "name": "Null2", 00:10:40.598 "nguid": "8145144CF56049AC9D7C5F3F01DB6485", 
00:10:40.598 "uuid": "8145144c-f560-49ac-9d7c-5f3f01db6485" 00:10:40.598 } 00:10:40.598 ] 00:10:40.598 }, 00:10:40.598 { 00:10:40.598 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:40.598 "subtype": "NVMe", 00:10:40.598 "listen_addresses": [ 00:10:40.598 { 00:10:40.598 "trtype": "TCP", 00:10:40.598 "adrfam": "IPv4", 00:10:40.598 "traddr": "10.0.0.2", 00:10:40.598 "trsvcid": "4420" 00:10:40.598 } 00:10:40.598 ], 00:10:40.598 "allow_any_host": true, 00:10:40.598 "hosts": [], 00:10:40.598 "serial_number": "SPDK00000000000003", 00:10:40.598 "model_number": "SPDK bdev Controller", 00:10:40.598 "max_namespaces": 32, 00:10:40.598 "min_cntlid": 1, 00:10:40.598 "max_cntlid": 65519, 00:10:40.598 "namespaces": [ 00:10:40.598 { 00:10:40.598 "nsid": 1, 00:10:40.598 "bdev_name": "Null3", 00:10:40.598 "name": "Null3", 00:10:40.598 "nguid": "780BA5E795B8441EBF8188FBBAD064CA", 00:10:40.598 "uuid": "780ba5e7-95b8-441e-bf81-88fbbad064ca" 00:10:40.598 } 00:10:40.598 ] 00:10:40.598 }, 00:10:40.598 { 00:10:40.598 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:40.598 "subtype": "NVMe", 00:10:40.598 "listen_addresses": [ 00:10:40.598 { 00:10:40.598 "trtype": "TCP", 00:10:40.598 "adrfam": "IPv4", 00:10:40.598 "traddr": "10.0.0.2", 00:10:40.598 "trsvcid": "4420" 00:10:40.598 } 00:10:40.598 ], 00:10:40.598 "allow_any_host": true, 00:10:40.598 "hosts": [], 00:10:40.598 "serial_number": "SPDK00000000000004", 00:10:40.598 "model_number": "SPDK bdev Controller", 00:10:40.598 "max_namespaces": 32, 00:10:40.598 "min_cntlid": 1, 00:10:40.598 "max_cntlid": 65519, 00:10:40.598 "namespaces": [ 00:10:40.598 { 00:10:40.598 "nsid": 1, 00:10:40.598 "bdev_name": "Null4", 00:10:40.598 "name": "Null4", 00:10:40.598 "nguid": "CA6F076375B94B8DA392A9FEE759061F", 00:10:40.598 "uuid": "ca6f0763-75b9-4b8d-a392-a9fee759061f" 00:10:40.598 } 00:10:40.598 ] 00:10:40.598 } 00:10:40.598 ] 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.598 
10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.598 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.599 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.599 rmmod nvme_tcp 00:10:40.599 rmmod nvme_fabrics 00:10:40.857 rmmod nvme_keyring 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3399698 ']' 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3399698 ']' 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399698' 00:10:40.857 killing process with pid 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3399698 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.857 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.115 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.115 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.115 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.115 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.020 00:10:43.020 real 0m9.389s 00:10:43.020 user 0m5.660s 00:10:43.020 sys 0m4.850s 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.020 ************************************ 00:10:43.020 END TEST nvmf_target_discovery 00:10:43.020 ************************************ 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.020 ************************************ 00:10:43.020 START TEST nvmf_referrals 00:10:43.020 ************************************ 00:10:43.020 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.280 * Looking for test storage... 
00:10:43.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:43.280 10:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.280 
--rc genhtml_branch_coverage=1 00:10:43.280 --rc genhtml_function_coverage=1 00:10:43.280 --rc genhtml_legend=1 00:10:43.280 --rc geninfo_all_blocks=1 00:10:43.280 --rc geninfo_unexecuted_blocks=1 00:10:43.280 00:10:43.280 ' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.280 --rc genhtml_branch_coverage=1 00:10:43.280 --rc genhtml_function_coverage=1 00:10:43.280 --rc genhtml_legend=1 00:10:43.280 --rc geninfo_all_blocks=1 00:10:43.280 --rc geninfo_unexecuted_blocks=1 00:10:43.280 00:10:43.280 ' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.280 --rc genhtml_branch_coverage=1 00:10:43.280 --rc genhtml_function_coverage=1 00:10:43.280 --rc genhtml_legend=1 00:10:43.280 --rc geninfo_all_blocks=1 00:10:43.280 --rc geninfo_unexecuted_blocks=1 00:10:43.280 00:10:43.280 ' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.280 --rc genhtml_branch_coverage=1 00:10:43.280 --rc genhtml_function_coverage=1 00:10:43.280 --rc genhtml_legend=1 00:10:43.280 --rc geninfo_all_blocks=1 00:10:43.280 --rc geninfo_unexecuted_blocks=1 00:10:43.280 00:10:43.280 ' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.280 
10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.280 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.281 10:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.281 10:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.281 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.843 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:49.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:49.844 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:49.844 Found net devices under 0000:86:00.0: cvl_0_0 00:10:49.844 10:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:49.844 Found net devices under 0000:86:00.1: cvl_0_1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:10:49.844 00:10:49.844 --- 10.0.0.2 ping statistics --- 00:10:49.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.844 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:10:49.844 00:10:49.844 --- 10.0.0.1 ping statistics --- 00:10:49.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.844 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3403476 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3403476 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3403476 ']' 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.844 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.844 [2024-11-20 10:27:49.999853] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:10:49.844 [2024-11-20 10:27:49.999901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.844 [2024-11-20 10:27:50.083421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.844 [2024-11-20 10:27:50.125767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.844 [2024-11-20 10:27:50.125806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:49.845 [2024-11-20 10:27:50.125814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.845 [2024-11-20 10:27:50.125820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.845 [2024-11-20 10:27:50.125825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.845 [2024-11-20 10:27:50.127301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.845 [2024-11-20 10:27:50.127412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.845 [2024-11-20 10:27:50.127447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.845 [2024-11-20 10:27:50.127448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 [2024-11-20 10:27:50.890999] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 [2024-11-20 10:27:50.904322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.412 10:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.412 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.671 10:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.671 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.930 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.188 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:51.189 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.189 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:51.189 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:51.189 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.189 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:51.446 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.446 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.447 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:51.706 10:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.706 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:51.964 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.964 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:51.964 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.965 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.223 rmmod nvme_tcp 00:10:52.223 rmmod nvme_fabrics 00:10:52.223 rmmod nvme_keyring 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3403476 ']' 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3403476 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3403476 ']' 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3403476 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3403476 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3403476' 00:10:52.223 killing process with pid 3403476 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3403476 00:10:52.223 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3403476 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.482 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.015 00:10:55.015 real 0m11.415s 00:10:55.015 user 0m14.475s 00:10:55.015 sys 0m5.318s 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.015 
************************************ 00:10:55.015 END TEST nvmf_referrals 00:10:55.015 ************************************ 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.015 ************************************ 00:10:55.015 START TEST nvmf_connect_disconnect 00:10:55.015 ************************************ 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:55.015 * Looking for test storage... 
00:10:55.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.015 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.016 --rc genhtml_branch_coverage=1 00:10:55.016 --rc genhtml_function_coverage=1 00:10:55.016 --rc genhtml_legend=1 00:10:55.016 --rc geninfo_all_blocks=1 00:10:55.016 --rc geninfo_unexecuted_blocks=1 00:10:55.016 00:10:55.016 ' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.016 --rc genhtml_branch_coverage=1 00:10:55.016 --rc genhtml_function_coverage=1 00:10:55.016 --rc genhtml_legend=1 00:10:55.016 --rc geninfo_all_blocks=1 00:10:55.016 --rc geninfo_unexecuted_blocks=1 00:10:55.016 00:10:55.016 ' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.016 --rc genhtml_branch_coverage=1 00:10:55.016 --rc genhtml_function_coverage=1 00:10:55.016 --rc genhtml_legend=1 00:10:55.016 --rc geninfo_all_blocks=1 00:10:55.016 --rc geninfo_unexecuted_blocks=1 00:10:55.016 00:10:55.016 ' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.016 --rc genhtml_branch_coverage=1 00:10:55.016 --rc genhtml_function_coverage=1 00:10:55.016 --rc genhtml_legend=1 00:10:55.016 --rc geninfo_all_blocks=1 00:10:55.016 --rc geninfo_unexecuted_blocks=1 00:10:55.016 00:10:55.016 ' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:55.016 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.017 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.585 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.585 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.585 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.585 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.585 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.586 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:01.586 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:01.586 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.586 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:01.586 Found net devices under 0000:86:00.0: cvl_0_0 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.586 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:01.586 Found net devices under 0000:86:00.1: cvl_0_1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
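The device-discovery steps above match known NIC vendor:device IDs (here Intel E810, `0x8086:0x159b`) and then map each PCI address to its kernel net interfaces via sysfs. A minimal sketch of that sysfs lookup, assuming the example PCI address `0000:86:00.0` from the log (real systems will differ, so the function falls back to `none`):

```shell
# List the net devices registered under a PCI function, the same sysfs
# path the harness globs ("/sys/bus/pci/devices/$pci/net/"*).
list_net_devs() {
  pci=$1
  if [ -d "/sys/bus/pci/devices/$pci/net" ]; then
    ls "/sys/bus/pci/devices/$pci/net"
  else
    echo "none"   # PCI address absent on this machine
  fi
}
list_net_devs 0000:86:00.0
```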
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.586 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:01.586 00:11:01.586 --- 10.0.0.2 ping statistics --- 00:11:01.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.586 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:01.586 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:01.586 00:11:01.587 --- 10.0.0.1 ping statistics --- 00:11:01.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.587 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
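The topology the trace builds above can be summarized: the target-side interface (`cvl_0_0`) is moved into a private network namespace, each side gets a `/24` address, an iptables rule opens the NVMe-oF TCP port 4420, and a one-packet ping in each direction verifies connectivity before the target starts. A hedged sketch follows; interface and namespace names are copied from this log, and the commands are printed rather than executed because they all require root:

```shell
# Namespace-based test topology from the trace: target NIC isolated in
# cvl_0_0_ns_spdk at 10.0.0.2, initiator NIC in the root namespace at
# 10.0.0.1, port 4420 opened, then a ping smoke test. Printed, not run.
netns_setup_cmds() {
  cat <<'EOF'
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
EOF
}
netns_setup_cmds
```

Putting the target in its own namespace lets initiator and target share one physical machine while still exercising a real TCP path between two NICs.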
nvmfpid=3407560 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3407560 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3407560 ']' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 [2024-11-20 10:28:01.488300] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:11:01.587 [2024-11-20 10:28:01.488353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.587 [2024-11-20 10:28:01.569499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.587 [2024-11-20 10:28:01.612594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:01.587 [2024-11-20 10:28:01.612634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.587 [2024-11-20 10:28:01.612641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.587 [2024-11-20 10:28:01.612650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.587 [2024-11-20 10:28:01.612655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.587 [2024-11-20 10:28:01.614104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.587 [2024-11-20 10:28:01.614215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.587 [2024-11-20 10:28:01.614319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.587 [2024-11-20 10:28:01.614320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:01.587 10:28:01 
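The `nvmfappstart`/`waitforlisten` step traced above launches `nvmf_tgt` and then blocks until the new process answers on its UNIX RPC socket. A minimal sketch of that polling loop (SPDK's real `waitforlisten` in `autotest_common.sh` also issues RPCs over the socket; the socket-existence check here is a simplification, and the default path is an assumption taken from the log):

```shell
#!/usr/bin/env bash
# Simplified stand-in for waitforlisten: poll until the target's RPC socket
# (default /var/tmp/spdk.sock, as seen in the trace) appears, giving up
# after max_retries attempts. Returns 0 on success, 1 on timeout.
waitforlisten() {
    local sock="${1:-/var/tmp/spdk.sock}" max_retries="${2:-100}" i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0      # socket exists: target is listening
        sleep 0.1
    done
    return 1                            # timed out waiting for the target
}
```

With no `nvmf_tgt` running, `waitforlisten /var/tmp/spdk.sock 5` simply times out after half a second; in the log the real helper succeeds once the reactors have started.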
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 [2024-11-20 10:28:01.756174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 10:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 [2024-11-20 10:28:01.824160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:01.587 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:04.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:17.263 10:28:17 
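The `connect_disconnect.sh` trace above boils down to one transport creation plus four provisioning RPCs before the connect/disconnect loop starts. Sketched as a shell sequence, where `rpc` is a stub that only echoes the command line (against a live target you would invoke SPDK's `scripts/rpc.py`, whose exact path here is an assumption):

```shell
#!/usr/bin/env bash
# Echo-only sketch of the RPC sequence visible in the trace; swap the stub
# for scripts/rpc.py (talking to /var/tmp/spdk.sock) to run it for real.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, 8192 B IO unit, no in-capsule data
rpc bdev_malloc_create 64 512                       # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, the test loops `num_iterations=5` times connecting an initiator to `cnode1` and disconnecting it, which produces the five "disconnected 1 controller(s)" lines above.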
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.263 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.263 rmmod nvme_tcp 00:11:17.263 rmmod nvme_fabrics 00:11:17.522 rmmod nvme_keyring 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3407560 ']' 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3407560 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3407560 ']' 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3407560 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3407560 
00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3407560' 00:11:17.522 killing process with pid 3407560 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3407560 00:11:17.522 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3407560 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.782 10:28:18 
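The `nvmftestfini` path traced here tears the fixture down in a fixed order: kill the recorded `nvmf_tgt` pid, unload the NVMe/TCP kernel modules (retrying, since they can be briefly busy), strip only the SPDK-tagged iptables rules, and remove the test namespace. A side-effect-free sketch of that sequence, with `run` as an echo stub because the real commands need root and the test NICs (the ordering is read off the trace; details of the real helpers may differ):

```shell
#!/usr/bin/env bash
# Echo-only sketch of nvmftestfini/nvmfcleanup as traced above.
run() { echo "+ $*"; }

pid=3407560                                  # nvmfpid recorded at startup
run kill "$pid"                              # killprocess: stop nvmf_tgt
run modprobe -v -r nvme-tcp                  # retried up to 20x in the real script
run modprobe -v -r nvme-fabrics
# Drop only the SPDK_NVMF-tagged firewall rules, keep everything else.
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk          # _remove_spdk_ns
run ip -4 addr flush cvl_0_1                 # clear the initiator-side address
```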
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.782 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.774 00:11:19.774 real 0m25.114s 00:11:19.774 user 1m7.808s 00:11:19.774 sys 0m5.863s 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:19.774 ************************************ 00:11:19.774 END TEST nvmf_connect_disconnect 00:11:19.774 ************************************ 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.774 ************************************ 00:11:19.774 START TEST nvmf_multitarget 00:11:19.774 ************************************ 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.774 * Looking for test storage... 
00:11:19.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.774 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.079 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.079 --rc genhtml_branch_coverage=1 00:11:20.079 --rc genhtml_function_coverage=1 00:11:20.079 --rc genhtml_legend=1 00:11:20.079 --rc geninfo_all_blocks=1 00:11:20.079 --rc geninfo_unexecuted_blocks=1 00:11:20.079 00:11:20.079 ' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.079 --rc genhtml_branch_coverage=1 00:11:20.079 --rc genhtml_function_coverage=1 00:11:20.079 --rc genhtml_legend=1 00:11:20.079 --rc geninfo_all_blocks=1 00:11:20.079 --rc geninfo_unexecuted_blocks=1 00:11:20.079 00:11:20.079 ' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.079 --rc genhtml_branch_coverage=1 00:11:20.079 --rc genhtml_function_coverage=1 00:11:20.079 --rc genhtml_legend=1 00:11:20.079 --rc geninfo_all_blocks=1 00:11:20.079 --rc geninfo_unexecuted_blocks=1 00:11:20.079 00:11:20.079 ' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.079 --rc genhtml_branch_coverage=1 00:11:20.079 --rc genhtml_function_coverage=1 00:11:20.079 --rc genhtml_legend=1 00:11:20.079 --rc geninfo_all_blocks=1 00:11:20.079 --rc geninfo_unexecuted_blocks=1 00:11:20.079 00:11:20.079 ' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.079 10:28:20 
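The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) decides whether the installed `lcov` predates 2.x by splitting both version strings into components and comparing them numerically, left to right. A condensed re-implementation of that comparison (the real helper also splits on `-` and `:`; splitting only on `.` here is a simplification):

```shell
#!/usr/bin/env bash
# lt A B: succeed (return 0) iff version A is strictly less than version B,
# comparing dot-separated components as integers, missing components as 0.
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
    for ((v = 0; v < n; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                     # equal versions are not "less than"
}
```

This is why the trace takes the `lt 1.15 2` branch: component 0 compares 1 < 2, so the older `--rc lcov_branch_coverage=...` option spelling is selected.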
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.079 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.080 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.080 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:26.648 10:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.648 10:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:26.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.648 10:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.648 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.648 
10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.648 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.648 10:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.648 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:11:26.649 00:11:26.649 --- 10.0.0.2 ping statistics --- 00:11:26.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.649 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:26.649 00:11:26.649 --- 10.0.0.1 ping statistics --- 00:11:26.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.649 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3413954 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3413954 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3413954 ']' 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.649 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.649 [2024-11-20 10:28:26.662483] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:11:26.649 [2024-11-20 10:28:26.662528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.649 [2024-11-20 10:28:26.744031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.649 [2024-11-20 10:28:26.787047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.649 [2024-11-20 10:28:26.787082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:26.649 [2024-11-20 10:28:26.787090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.649 [2024-11-20 10:28:26.787096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.649 [2024-11-20 10:28:26.787101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.649 [2024-11-20 10:28:26.788520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.649 [2024-11-20 10:28:26.788545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.649 [2024-11-20 10:28:26.788636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.649 [2024-11-20 10:28:26.788637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:26.908 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.908 10:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:27.167 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:27.167 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:27.167 "nvmf_tgt_1" 00:11:27.167 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:27.167 "nvmf_tgt_2" 00:11:27.167 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:27.167 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.426 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:27.426 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:27.426 true 00:11:27.426 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:27.685 true 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.685 rmmod nvme_tcp 00:11:27.685 rmmod nvme_fabrics 00:11:27.685 rmmod nvme_keyring 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3413954 ']' 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3413954 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3413954 ']' 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3413954 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.685 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3413954 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3413954' 00:11:27.945 killing process with pid 3413954 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3413954 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3413954 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.945 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.479 00:11:30.479 real 0m10.254s 00:11:30.479 user 0m9.830s 00:11:30.479 sys 0m4.970s 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 ************************************ 00:11:30.479 END TEST nvmf_multitarget 00:11:30.479 ************************************ 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 ************************************ 00:11:30.479 START TEST nvmf_rpc 00:11:30.479 ************************************ 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:30.479 * Looking for test storage... 
00:11:30.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.479 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.479 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.480 --rc genhtml_branch_coverage=1 00:11:30.480 --rc genhtml_function_coverage=1 00:11:30.480 --rc genhtml_legend=1 00:11:30.480 --rc geninfo_all_blocks=1 00:11:30.480 --rc geninfo_unexecuted_blocks=1 
00:11:30.480 00:11:30.480 ' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.480 --rc genhtml_branch_coverage=1 00:11:30.480 --rc genhtml_function_coverage=1 00:11:30.480 --rc genhtml_legend=1 00:11:30.480 --rc geninfo_all_blocks=1 00:11:30.480 --rc geninfo_unexecuted_blocks=1 00:11:30.480 00:11:30.480 ' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.480 --rc genhtml_branch_coverage=1 00:11:30.480 --rc genhtml_function_coverage=1 00:11:30.480 --rc genhtml_legend=1 00:11:30.480 --rc geninfo_all_blocks=1 00:11:30.480 --rc geninfo_unexecuted_blocks=1 00:11:30.480 00:11:30.480 ' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.480 --rc genhtml_branch_coverage=1 00:11:30.480 --rc genhtml_function_coverage=1 00:11:30.480 --rc genhtml_legend=1 00:11:30.480 --rc geninfo_all_blocks=1 00:11:30.480 --rc geninfo_unexecuted_blocks=1 00:11:30.480 00:11:30.480 ' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.480 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.480 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.480 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.052 
10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:37.052 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.052 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:37.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:37.053 Found net devices under 0000:86:00.0: cvl_0_0 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:37.053 Found net devices under 0000:86:00.1: cvl_0_1 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.053 10:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.053 
10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:37.053 00:11:37.053 --- 10.0.0.2 ping statistics --- 00:11:37.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.053 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:11:37.053 00:11:37.053 --- 10.0.0.1 ping statistics --- 00:11:37.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.053 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3417752 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3417752 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3417752 ']' 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.053 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 [2024-11-20 10:28:36.960735] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:11:37.053 [2024-11-20 10:28:36.960780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.053 [2024-11-20 10:28:37.040961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.053 [2024-11-20 10:28:37.083583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.053 [2024-11-20 10:28:37.083619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:37.053 [2024-11-20 10:28:37.083626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.053 [2024-11-20 10:28:37.083632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.053 [2024-11-20 10:28:37.083637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.053 [2024-11-20 10:28:37.085119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.053 [2024-11-20 10:28:37.085233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.053 [2024-11-20 10:28:37.085340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.053 [2024-11-20 10:28:37.085341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:37.054 "tick_rate": 2300000000, 00:11:37.054 "poll_groups": [ 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_000", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_001", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_002", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_003", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [] 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 }' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:37.054 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 [2024-11-20 10:28:37.330149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:37.054 "tick_rate": 2300000000, 00:11:37.054 "poll_groups": [ 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_000", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [ 00:11:37.054 { 00:11:37.054 "trtype": "TCP" 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_001", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 
"completed_nvme_io": 0, 00:11:37.054 "transports": [ 00:11:37.054 { 00:11:37.054 "trtype": "TCP" 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_002", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [ 00:11:37.054 { 00:11:37.054 "trtype": "TCP" 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "name": "nvmf_tgt_poll_group_003", 00:11:37.054 "admin_qpairs": 0, 00:11:37.054 "io_qpairs": 0, 00:11:37.054 "current_admin_qpairs": 0, 00:11:37.054 "current_io_qpairs": 0, 00:11:37.054 "pending_bdev_io": 0, 00:11:37.054 "completed_nvme_io": 0, 00:11:37.054 "transports": [ 00:11:37.054 { 00:11:37.054 "trtype": "TCP" 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 }' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:37.054 
10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 Malloc1 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:37.054 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 [2024-11-20 10:28:37.516531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:37.054 [2024-11-20 10:28:37.545222] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:37.054 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:37.054 could not add new controller: failed to write to nvme-fabrics device 00:11:37.054 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.055 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.990 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.990 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.990 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.990 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.990 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:11:40.522 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:40.523 10:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.523 [2024-11-20 10:28:40.849763] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:40.523 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:40.523 could not add new controller: failed to write to nvme-fabrics device 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:40.523 
10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.523 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.458 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.458 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.458 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.458 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.458 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:43.361 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:43.619 10:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.619 [2024-11-20 10:28:44.300626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.619 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.997 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.997 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.997 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.997 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.997 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.900 
10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 [2024-11-20 10:28:47.612299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.900 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.159 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.159 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.105 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.105 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:48.105 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.105 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:48.105 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.637 10:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.637 [2024-11-20 10:28:50.970466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.637 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.638 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.638 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.575 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.575 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.575 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.575 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.575 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:53.477 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.736 [2024-11-20 10:28:54.362962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.736 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.737 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.111 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.111 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.111 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:55.111 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:55.111 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.014 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 [2024-11-20 10:28:57.659441] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.015 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.390 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.390 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.390 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.390 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:58.390 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.293 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.293 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 [2024-11-20 10:29:01.025885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 [2024-11-20 10:29:01.074007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.552 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 
10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:00.553 [2024-11-20 10:29:01.122157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 [2024-11-20 10:29:01.170320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 [2024-11-20 10:29:01.218492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.553 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:00.553 "tick_rate": 2300000000, 00:12:00.553 "poll_groups": [ 00:12:00.553 { 00:12:00.553 "name": "nvmf_tgt_poll_group_000", 00:12:00.553 "admin_qpairs": 2, 00:12:00.553 "io_qpairs": 168, 00:12:00.553 "current_admin_qpairs": 0, 00:12:00.553 "current_io_qpairs": 0, 00:12:00.553 "pending_bdev_io": 0, 00:12:00.553 "completed_nvme_io": 219, 00:12:00.553 "transports": [ 00:12:00.553 { 00:12:00.553 "trtype": "TCP" 00:12:00.553 } 00:12:00.553 ] 00:12:00.553 }, 00:12:00.553 { 00:12:00.553 "name": "nvmf_tgt_poll_group_001", 00:12:00.553 "admin_qpairs": 2, 00:12:00.553 "io_qpairs": 168, 00:12:00.553 "current_admin_qpairs": 0, 00:12:00.553 "current_io_qpairs": 0, 00:12:00.553 "pending_bdev_io": 0, 00:12:00.553 "completed_nvme_io": 231, 00:12:00.553 "transports": [ 00:12:00.553 { 00:12:00.553 "trtype": "TCP" 00:12:00.553 } 00:12:00.553 ] 00:12:00.553 }, 00:12:00.553 { 00:12:00.553 "name": "nvmf_tgt_poll_group_002", 00:12:00.553 "admin_qpairs": 1, 00:12:00.553 "io_qpairs": 168, 00:12:00.553 "current_admin_qpairs": 0, 00:12:00.553 "current_io_qpairs": 0, 00:12:00.553 "pending_bdev_io": 0, 
00:12:00.553 "completed_nvme_io": 319, 00:12:00.553 "transports": [ 00:12:00.553 { 00:12:00.553 "trtype": "TCP" 00:12:00.553 } 00:12:00.553 ] 00:12:00.553 }, 00:12:00.554 { 00:12:00.554 "name": "nvmf_tgt_poll_group_003", 00:12:00.554 "admin_qpairs": 2, 00:12:00.554 "io_qpairs": 168, 00:12:00.554 "current_admin_qpairs": 0, 00:12:00.554 "current_io_qpairs": 0, 00:12:00.554 "pending_bdev_io": 0, 00:12:00.554 "completed_nvme_io": 253, 00:12:00.554 "transports": [ 00:12:00.554 { 00:12:00.554 "trtype": "TCP" 00:12:00.554 } 00:12:00.554 ] 00:12:00.554 } 00:12:00.554 ] 00:12:00.554 }' 00:12:00.554 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:00.554 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:00.554 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:00.554 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.812 rmmod nvme_tcp 00:12:00.812 rmmod nvme_fabrics 00:12:00.812 rmmod nvme_keyring 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3417752 ']' 00:12:00.812 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3417752 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3417752 ']' 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3417752 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3417752 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3417752' 00:12:00.813 killing process with pid 3417752 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3417752 00:12:00.813 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3417752 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.077 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.610 00:12:03.610 real 0m32.998s 00:12:03.610 user 1m39.575s 00:12:03.610 sys 0m6.517s 00:12:03.610 10:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.610 ************************************ 00:12:03.610 END TEST nvmf_rpc 00:12:03.610 ************************************ 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.610 ************************************ 00:12:03.610 START TEST nvmf_invalid 00:12:03.610 ************************************ 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:03.610 * Looking for test storage... 
00:12:03.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.610 --rc genhtml_branch_coverage=1 00:12:03.610 --rc 
genhtml_function_coverage=1 00:12:03.610 --rc genhtml_legend=1 00:12:03.610 --rc geninfo_all_blocks=1 00:12:03.610 --rc geninfo_unexecuted_blocks=1 00:12:03.610 00:12:03.610 ' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.610 --rc genhtml_branch_coverage=1 00:12:03.610 --rc genhtml_function_coverage=1 00:12:03.610 --rc genhtml_legend=1 00:12:03.610 --rc geninfo_all_blocks=1 00:12:03.610 --rc geninfo_unexecuted_blocks=1 00:12:03.610 00:12:03.610 ' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.610 --rc genhtml_branch_coverage=1 00:12:03.610 --rc genhtml_function_coverage=1 00:12:03.610 --rc genhtml_legend=1 00:12:03.610 --rc geninfo_all_blocks=1 00:12:03.610 --rc geninfo_unexecuted_blocks=1 00:12:03.610 00:12:03.610 ' 00:12:03.610 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.610 --rc genhtml_branch_coverage=1 00:12:03.611 --rc genhtml_function_coverage=1 00:12:03.611 --rc genhtml_legend=1 00:12:03.611 --rc geninfo_all_blocks=1 00:12:03.611 --rc geninfo_unexecuted_blocks=1 00:12:03.611 00:12:03.611 ' 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.611 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.611 10:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.611 10:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.611 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.180 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.180 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:10.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:10.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:10.180 Found net devices under 0000:86:00.0: cvl_0_0 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:10.180 Found net devices under 0000:86:00.1: cvl_0_1 00:12:10.180 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.181 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.181 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:12:10.181 00:12:10.181 --- 10.0.0.2 ping statistics --- 00:12:10.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.181 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:10.181 00:12:10.181 --- 10.0.0.1 ping statistics --- 00:12:10.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.181 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.181 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3425579 00:12:10.181 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3425579 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3425579 ']' 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.181 [2024-11-20 10:29:10.057705] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:12:10.181 [2024-11-20 10:29:10.057763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.181 [2024-11-20 10:29:10.139490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.181 [2024-11-20 10:29:10.182825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.181 [2024-11-20 10:29:10.182864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.181 [2024-11-20 10:29:10.182871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.181 [2024-11-20 10:29:10.182877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.181 [2024-11-20 10:29:10.182882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:10.181 [2024-11-20 10:29:10.184411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.181 [2024-11-20 10:29:10.184430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.181 [2024-11-20 10:29:10.184519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.181 [2024-11-20 10:29:10.184520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9105 00:12:10.181 [2024-11-20 10:29:10.502862] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:10.181 { 00:12:10.181 "nqn": "nqn.2016-06.io.spdk:cnode9105", 00:12:10.181 "tgt_name": "foobar", 00:12:10.181 "method": "nvmf_create_subsystem", 00:12:10.181 "req_id": 1 00:12:10.181 } 00:12:10.181 Got JSON-RPC error 
response 00:12:10.181 response: 00:12:10.181 { 00:12:10.181 "code": -32603, 00:12:10.181 "message": "Unable to find target foobar" 00:12:10.181 }' 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:10.181 { 00:12:10.181 "nqn": "nqn.2016-06.io.spdk:cnode9105", 00:12:10.181 "tgt_name": "foobar", 00:12:10.181 "method": "nvmf_create_subsystem", 00:12:10.181 "req_id": 1 00:12:10.181 } 00:12:10.181 Got JSON-RPC error response 00:12:10.181 response: 00:12:10.181 { 00:12:10.181 "code": -32603, 00:12:10.181 "message": "Unable to find target foobar" 00:12:10.181 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:10.181 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25773 00:12:10.182 [2024-11-20 10:29:10.711607] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25773: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:10.182 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode25773", 00:12:10.182 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 "req_id": 1 00:12:10.182 } 00:12:10.182 Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:10.182 }' 00:12:10.182 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:10.182 { 00:12:10.182 "nqn": "nqn.2016-06.io.spdk:cnode25773", 00:12:10.182 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:10.182 "method": "nvmf_create_subsystem", 00:12:10.182 
"req_id": 1 00:12:10.182 } 00:12:10.182 Got JSON-RPC error response 00:12:10.182 response: 00:12:10.182 { 00:12:10.182 "code": -32602, 00:12:10.182 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:10.182 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.182 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:10.182 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30438 00:12:10.441 [2024-11-20 10:29:10.928332] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30438: invalid model number 'SPDK_Controller' 00:12:10.441 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:10.441 { 00:12:10.441 "nqn": "nqn.2016-06.io.spdk:cnode30438", 00:12:10.441 "model_number": "SPDK_Controller\u001f", 00:12:10.441 "method": "nvmf_create_subsystem", 00:12:10.441 "req_id": 1 00:12:10.441 } 00:12:10.441 Got JSON-RPC error response 00:12:10.441 response: 00:12:10.441 { 00:12:10.441 "code": -32602, 00:12:10.442 "message": "Invalid MN SPDK_Controller\u001f" 00:12:10.442 }' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:10.442 { 00:12:10.442 "nqn": "nqn.2016-06.io.spdk:cnode30438", 00:12:10.442 "model_number": "SPDK_Controller\u001f", 00:12:10.442 "method": "nvmf_create_subsystem", 00:12:10.442 "req_id": 1 00:12:10.442 } 00:12:10.442 Got JSON-RPC error response 00:12:10.442 response: 00:12:10.442 { 00:12:10.442 "code": -32602, 00:12:10.442 "message": "Invalid MN SPDK_Controller\u001f" 00:12:10.442 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:10.442 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:10.442 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:10.442 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:10.442 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:10.442 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.443 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'RBv>bwus{Jbp1+TtqD$KB' 00:12:10.443 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'RBv>bwus{Jbp1+TtqD$KB' nqn.2016-06.io.spdk:cnode22217 00:12:10.702 [2024-11-20 10:29:11.281548] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22217: invalid serial number 'RBv>bwus{Jbp1+TtqD$KB' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:10.702 { 00:12:10.702 "nqn": "nqn.2016-06.io.spdk:cnode22217", 00:12:10.702 "serial_number": "RBv>bwus{Jbp1+TtqD$KB", 00:12:10.702 "method": "nvmf_create_subsystem", 00:12:10.702 "req_id": 1 00:12:10.702 } 00:12:10.702 Got JSON-RPC error response 00:12:10.702 response: 00:12:10.702 { 00:12:10.702 "code": -32602, 00:12:10.702 "message": "Invalid SN RBv>bwus{Jbp1+TtqD$KB" 00:12:10.702 }' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:10.702 { 00:12:10.702 "nqn": "nqn.2016-06.io.spdk:cnode22217", 00:12:10.702 "serial_number": "RBv>bwus{Jbp1+TtqD$KB", 00:12:10.702 "method": "nvmf_create_subsystem", 00:12:10.702 "req_id": 1 00:12:10.702 } 00:12:10.702 Got JSON-RPC error response 00:12:10.702 response: 00:12:10.702 { 00:12:10.702 "code": -32602, 00:12:10.702 "message": "Invalid SN RBv>bwus{Jbp1+TtqD$KB" 00:12:10.702 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:10.702 10:29:11 
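The unrolled `printf %x` / `echo -e` / `string+=` trace above is one pass of the test's random-string generator. A minimal sketch of that technique follows; it is an assumption based on the trace, not the verbatim `gen_random_s` from `target/invalid.sh` (the real helper draws its indices differently, this sketch uses `$RANDOM`).

```shell
# Sketch (hypothetical reconstruction from the trace, not the real helper):
# build a random string of printable ASCII, codes 32..127, one char per loop
# iteration, exactly as the unrolled trace shows.
gen_random_s() {
	local length=$1 ll code
	local chars=() string=
	# mirrors the chars=('32' '33' ... '127') array declared in the trace
	for ((code = 32; code <= 127; code++)); do
		chars+=("$code")
	done
	for ((ll = 0; ll < length; ll++)); do
		# printf %x renders the code in hex; echo -e expands \xNN to the char
		code=${chars[RANDOM % ${#chars[@]}]}
		string+=$(echo -e "\\x$(printf %x "$code")")
	done
	echo "$string"
}
```

The trace invokes this with length 21 for the serial-number case and 41 for the model-number case, then feeds the result to `rpc.py nvmf_create_subsystem -s` / `-d` expecting rejection.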
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:10.702 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:10.702 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:10.702 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.702 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'V,4p|_wGVx:i>'\''Pj8gTrp;#qX2bL4Lw=1T 1i[;k' 00:12:10.962 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'V,4p|_wGVx:i>'\''Pj8gTrp;#qX2bL4Lw=1T 1i[;k' nqn.2016-06.io.spdk:cnode27054 00:12:11.221 [2024-11-20 10:29:11.759279] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27054: invalid model number 'V,4p|_wGVx:i>'Pj8gTrp;#qX2bL4Lw=1T 1i[;k' 00:12:11.221 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:11.221 { 00:12:11.221 "nqn": "nqn.2016-06.io.spdk:cnode27054", 00:12:11.221 "model_number": "V,4p|_wGVx:i>'\''Pj8gTrp;#qX2bL4Lw=1T 1\u007fi[;k", 00:12:11.221 "method": "nvmf_create_subsystem", 00:12:11.221 "req_id": 1 00:12:11.221 } 00:12:11.221 Got JSON-RPC error response 00:12:11.221 response: 00:12:11.221 { 00:12:11.221 "code": -32602, 00:12:11.221 "message": "Invalid MN V,4p|_wGVx:i>'\''Pj8gTrp;#qX2bL4Lw=1T 1\u007fi[;k" 00:12:11.221 }' 00:12:11.221 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:11.221 { 
00:12:11.221 "nqn": "nqn.2016-06.io.spdk:cnode27054", 00:12:11.221 "model_number": "V,4p|_wGVx:i>'Pj8gTrp;#qX2bL4Lw=1T 1\u007fi[;k", 00:12:11.221 "method": "nvmf_create_subsystem", 00:12:11.221 "req_id": 1 00:12:11.221 } 00:12:11.221 Got JSON-RPC error response 00:12:11.221 response: 00:12:11.221 { 00:12:11.221 "code": -32602, 00:12:11.221 "message": "Invalid MN V,4p|_wGVx:i>'Pj8gTrp;#qX2bL4Lw=1T 1\u007fi[;k" 00:12:11.221 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:11.221 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:11.480 [2024-11-20 10:29:11.968049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.480 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:11.480 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:11.480 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:11.480 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:11.739 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:11.739 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:11.739 [2024-11-20 10:29:12.385455] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:11.739 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:11.739 { 00:12:11.739 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.739 "listen_address": { 00:12:11.739 "trtype": "tcp", 00:12:11.739 "traddr": "", 
00:12:11.739 "trsvcid": "4421" 00:12:11.739 }, 00:12:11.739 "method": "nvmf_subsystem_remove_listener", 00:12:11.739 "req_id": 1 00:12:11.739 } 00:12:11.739 Got JSON-RPC error response 00:12:11.739 response: 00:12:11.739 { 00:12:11.739 "code": -32602, 00:12:11.739 "message": "Invalid parameters" 00:12:11.739 }' 00:12:11.739 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:11.739 { 00:12:11.739 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.739 "listen_address": { 00:12:11.739 "trtype": "tcp", 00:12:11.739 "traddr": "", 00:12:11.739 "trsvcid": "4421" 00:12:11.739 }, 00:12:11.739 "method": "nvmf_subsystem_remove_listener", 00:12:11.739 "req_id": 1 00:12:11.739 } 00:12:11.739 Got JSON-RPC error response 00:12:11.739 response: 00:12:11.739 { 00:12:11.739 "code": -32602, 00:12:11.739 "message": "Invalid parameters" 00:12:11.739 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:11.739 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1574 -i 0 00:12:11.998 [2024-11-20 10:29:12.598131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1574: invalid cntlid range [0-65519] 00:12:11.998 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:11.998 { 00:12:11.998 "nqn": "nqn.2016-06.io.spdk:cnode1574", 00:12:11.998 "min_cntlid": 0, 00:12:11.998 "method": "nvmf_create_subsystem", 00:12:11.998 "req_id": 1 00:12:11.998 } 00:12:11.998 Got JSON-RPC error response 00:12:11.998 response: 00:12:11.998 { 00:12:11.998 "code": -32602, 00:12:11.998 "message": "Invalid cntlid range [0-65519]" 00:12:11.998 }' 00:12:11.998 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:11.998 { 00:12:11.998 "nqn": "nqn.2016-06.io.spdk:cnode1574", 00:12:11.998 "min_cntlid": 0, 
00:12:11.998 "method": "nvmf_create_subsystem", 00:12:11.998 "req_id": 1 00:12:11.998 } 00:12:11.998 Got JSON-RPC error response 00:12:11.998 response: 00:12:11.998 { 00:12:11.998 "code": -32602, 00:12:11.998 "message": "Invalid cntlid range [0-65519]" 00:12:11.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.998 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2914 -i 65520 00:12:12.257 [2024-11-20 10:29:12.806842] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2914: invalid cntlid range [65520-65519] 00:12:12.257 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:12.257 { 00:12:12.257 "nqn": "nqn.2016-06.io.spdk:cnode2914", 00:12:12.257 "min_cntlid": 65520, 00:12:12.257 "method": "nvmf_create_subsystem", 00:12:12.257 "req_id": 1 00:12:12.257 } 00:12:12.257 Got JSON-RPC error response 00:12:12.257 response: 00:12:12.257 { 00:12:12.257 "code": -32602, 00:12:12.257 "message": "Invalid cntlid range [65520-65519]" 00:12:12.257 }' 00:12:12.257 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:12.257 { 00:12:12.257 "nqn": "nqn.2016-06.io.spdk:cnode2914", 00:12:12.257 "min_cntlid": 65520, 00:12:12.257 "method": "nvmf_create_subsystem", 00:12:12.257 "req_id": 1 00:12:12.257 } 00:12:12.257 Got JSON-RPC error response 00:12:12.257 response: 00:12:12.257 { 00:12:12.257 "code": -32602, 00:12:12.257 "message": "Invalid cntlid range [65520-65519]" 00:12:12.257 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.257 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20533 -I 0 00:12:12.516 [2024-11-20 10:29:13.007505] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20533: invalid cntlid range [1-0] 00:12:12.516 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:12.516 { 00:12:12.516 "nqn": "nqn.2016-06.io.spdk:cnode20533", 00:12:12.516 "max_cntlid": 0, 00:12:12.516 "method": "nvmf_create_subsystem", 00:12:12.516 "req_id": 1 00:12:12.516 } 00:12:12.516 Got JSON-RPC error response 00:12:12.516 response: 00:12:12.516 { 00:12:12.516 "code": -32602, 00:12:12.516 "message": "Invalid cntlid range [1-0]" 00:12:12.516 }' 00:12:12.516 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:12.516 { 00:12:12.516 "nqn": "nqn.2016-06.io.spdk:cnode20533", 00:12:12.516 "max_cntlid": 0, 00:12:12.516 "method": "nvmf_create_subsystem", 00:12:12.516 "req_id": 1 00:12:12.516 } 00:12:12.516 Got JSON-RPC error response 00:12:12.516 response: 00:12:12.516 { 00:12:12.516 "code": -32602, 00:12:12.516 "message": "Invalid cntlid range [1-0]" 00:12:12.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.516 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6160 -I 65520 00:12:12.516 [2024-11-20 10:29:13.204207] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6160: invalid cntlid range [1-65520] 00:12:12.516 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:12.516 { 00:12:12.516 "nqn": "nqn.2016-06.io.spdk:cnode6160", 00:12:12.516 "max_cntlid": 65520, 00:12:12.516 "method": "nvmf_create_subsystem", 00:12:12.516 "req_id": 1 00:12:12.516 } 00:12:12.516 Got JSON-RPC error response 00:12:12.516 response: 00:12:12.516 { 00:12:12.516 "code": -32602, 00:12:12.516 "message": "Invalid cntlid range [1-65520]" 00:12:12.516 }' 00:12:12.516 10:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:12.516 { 00:12:12.516 "nqn": "nqn.2016-06.io.spdk:cnode6160", 00:12:12.516 "max_cntlid": 65520, 00:12:12.516 "method": "nvmf_create_subsystem", 00:12:12.516 "req_id": 1 00:12:12.516 } 00:12:12.516 Got JSON-RPC error response 00:12:12.516 response: 00:12:12.516 { 00:12:12.516 "code": -32602, 00:12:12.516 "message": "Invalid cntlid range [1-65520]" 00:12:12.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.516 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24899 -i 6 -I 5 00:12:12.775 [2024-11-20 10:29:13.404866] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24899: invalid cntlid range [6-5] 00:12:12.775 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:12.775 { 00:12:12.775 "nqn": "nqn.2016-06.io.spdk:cnode24899", 00:12:12.775 "min_cntlid": 6, 00:12:12.775 "max_cntlid": 5, 00:12:12.775 "method": "nvmf_create_subsystem", 00:12:12.775 "req_id": 1 00:12:12.775 } 00:12:12.775 Got JSON-RPC error response 00:12:12.775 response: 00:12:12.775 { 00:12:12.775 "code": -32602, 00:12:12.775 "message": "Invalid cntlid range [6-5]" 00:12:12.775 }' 00:12:12.775 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:12.775 { 00:12:12.775 "nqn": "nqn.2016-06.io.spdk:cnode24899", 00:12:12.775 "min_cntlid": 6, 00:12:12.775 "max_cntlid": 5, 00:12:12.775 "method": "nvmf_create_subsystem", 00:12:12.775 "req_id": 1 00:12:12.775 } 00:12:12.775 Got JSON-RPC error response 00:12:12.775 response: 00:12:12.775 { 00:12:12.775 "code": -32602, 00:12:12.775 "message": "Invalid cntlid range [6-5]" 00:12:12.775 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.775 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:13.034 { 00:12:13.034 "name": "foobar", 00:12:13.034 "method": "nvmf_delete_target", 00:12:13.034 "req_id": 1 00:12:13.034 } 00:12:13.034 Got JSON-RPC error response 00:12:13.034 response: 00:12:13.034 { 00:12:13.034 "code": -32602, 00:12:13.034 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:13.034 }' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:13.034 { 00:12:13.034 "name": "foobar", 00:12:13.034 "method": "nvmf_delete_target", 00:12:13.034 "req_id": 1 00:12:13.034 } 00:12:13.034 Got JSON-RPC error response 00:12:13.034 response: 00:12:13.034 { 00:12:13.034 "code": -32602, 00:12:13.034 "message": "The specified target doesn't exist, cannot delete it." 
00:12:13.034 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.034 rmmod nvme_tcp 00:12:13.034 rmmod nvme_fabrics 00:12:13.034 rmmod nvme_keyring 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3425579 ']' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3425579 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3425579 ']' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3425579 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425579 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425579' 00:12:13.034 killing process with pid 3425579 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3425579 00:12:13.034 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3425579 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.293 10:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.293 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.196 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.196 00:12:15.196 real 0m12.080s 00:12:15.196 user 0m18.702s 00:12:15.196 sys 0m5.518s 00:12:15.196 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.197 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.197 ************************************ 00:12:15.197 END TEST nvmf_invalid 00:12:15.197 ************************************ 00:12:15.197 10:29:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:15.197 10:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.197 10:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.197 10:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.456 ************************************ 00:12:15.456 START TEST nvmf_connect_stress 00:12:15.456 ************************************ 00:12:15.456 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:15.456 * Looking for test storage... 
00:12:15.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:15.456 10:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.456 10:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.456 --rc genhtml_branch_coverage=1 00:12:15.456 --rc genhtml_function_coverage=1 00:12:15.456 --rc genhtml_legend=1 00:12:15.456 --rc geninfo_all_blocks=1 00:12:15.456 --rc geninfo_unexecuted_blocks=1 00:12:15.456 00:12:15.456 ' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.456 --rc genhtml_branch_coverage=1 00:12:15.456 --rc genhtml_function_coverage=1 00:12:15.456 --rc genhtml_legend=1 00:12:15.456 --rc geninfo_all_blocks=1 00:12:15.456 --rc geninfo_unexecuted_blocks=1 00:12:15.456 00:12:15.456 ' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.456 --rc genhtml_branch_coverage=1 00:12:15.456 --rc genhtml_function_coverage=1 00:12:15.456 --rc genhtml_legend=1 00:12:15.456 --rc geninfo_all_blocks=1 00:12:15.456 --rc geninfo_unexecuted_blocks=1 00:12:15.456 00:12:15.456 ' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.456 --rc genhtml_branch_coverage=1 00:12:15.456 --rc genhtml_function_coverage=1 00:12:15.456 --rc genhtml_legend=1 00:12:15.456 --rc geninfo_all_blocks=1 00:12:15.456 --rc geninfo_unexecuted_blocks=1 00:12:15.456 00:12:15.456 ' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.456 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.145 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:22.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.145 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:22.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:22.145 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.146 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:22.146 Found net devices under 0000:86:00.0: cvl_0_0 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:22.146 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.146 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
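The `nvmf_tcp_init` steps above (flush the two port-interfaces, move the target side into a fresh network namespace, assign 10.0.0.1/10.0.0.2, bring links up, open TCP 4420 in iptables) can be sketched as a dry run. This is a hypothetical summary of what the log shows, not the suite's own helper: `run()` only records each command instead of executing it, so no root privileges or real `cvl_0_*` NICs are needed.

```shell
# Dry-run sketch of the target-namespace setup performed in the log above.
# Interface/namespace names are taken from the log; run() echoes rather than executes.
NS=cvl_0_0_ns_spdk          # target namespace, as logged
TGT_IF=cvl_0_0              # target-side interface
INI_IF=cvl_0_1              # initiator-side interface
SETUP_LOG=""
run() { SETUP_LOG="${SETUP_LOG}$*
"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
printf '%s' "$SETUP_LOG"
```

The cross-namespace pings that follow in the log (initiator to 10.0.0.2, and `ip netns exec` back to 10.0.0.1) verify this topology before the target is started.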
00:12:22.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:12:22.146 00:12:22.146 --- 10.0.0.2 ping statistics --- 00:12:22.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.146 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:22.146 00:12:22.146 --- 10.0.0.1 ping statistics --- 00:12:22.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.146 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:22.146 10:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3429750 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3429750 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3429750 ']' 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.146 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 [2024-11-20 10:29:22.213188] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:12:22.147 [2024-11-20 10:29:22.213238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.147 [2024-11-20 10:29:22.297485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.147 [2024-11-20 10:29:22.337501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.147 [2024-11-20 10:29:22.337538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.147 [2024-11-20 10:29:22.337547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.147 [2024-11-20 10:29:22.337553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.147 [2024-11-20 10:29:22.337559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:22.147 [2024-11-20 10:29:22.339008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.147 [2024-11-20 10:29:22.339118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.147 [2024-11-20 10:29:22.339119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 [2024-11-20 10:29:22.487550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 [2024-11-20 10:29:22.507800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 NULL1 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3429943 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
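The `rpc_cmd` calls in the log configure the freshly started target in four steps: create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1`, add a listener on 10.0.0.2:4420, and back it with a null bdev. A hypothetical dry-run sketch of that sequence (the `scripts/rpc.py` path is an assumption; in the suite, `rpc_cmd` wraps the RPC client against `/var/tmp/spdk.sock`):

```shell
# Dry-run sketch of the RPC configuration sequence shown in the log.
# RPC path is assumed; rpc_cmd here records commands instead of issuing them.
RPC="scripts/rpc.py"
rpc_calls=""
rpc_cmd() { rpc_calls="${rpc_calls}${RPC} $*
"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
printf '%s' "$rpc_calls"
```

Once the listener is up, the `connect_stress` binary is pointed at the same trid (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`), as the next log lines show.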
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.147 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.406 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.406 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:22.406 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.406 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.406 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.665 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.665 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:22.665 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.665 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.665 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.923 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.923 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:22.923 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.923 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.923 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.181 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.181 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:23.181 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.181 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.181 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.748 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.748 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:23.748 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.748 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.748 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.006 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.006 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:24.006 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.006 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.006 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.264 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.264 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:24.264 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.264 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.264 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.522 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.522 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:24.522 10:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.522 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.522 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.089 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.089 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:25.089 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.089 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.089 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.347 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.347 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:25.347 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.347 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.347 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.605 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.605 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:25.605 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.605 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.605 
10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.864 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.864 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:25.864 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.864 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.864 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.122 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.122 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:26.122 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.122 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.122 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.688 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.689 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:26.689 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.689 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.689 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.947 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.947 
10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:26.947 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.947 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.947 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.205 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.205 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:27.205 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.205 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.205 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.463 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.463 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:27.463 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.463 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.463 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:28.030 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:28.030 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.030 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.288 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.288 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:28.288 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.288 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.288 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.546 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.546 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:28.546 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.546 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.546 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.804 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.804 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:28.804 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.804 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.804 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:29.062 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.062 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:29.062 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.062 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.062 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.629 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.629 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:29.629 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.629 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.629 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.888 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:29.888 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.888 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.888 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.146 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.146 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3429943 00:12:30.146 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.146 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.146 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.405 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.405 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:30.405 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.405 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.405 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.972 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.972 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:30.972 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.972 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.972 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.230 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.230 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:31.230 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.230 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:31.230 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:31.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.747 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.747 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:31.747 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.747 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.747 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.005 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:32.005 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.005 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.005 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3429943 00:12:32.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3429943) - No such process 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3429943 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.572 rmmod nvme_tcp 00:12:32.572 rmmod nvme_fabrics 00:12:32.572 rmmod nvme_keyring 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3429750 ']' 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3429750 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3429750 ']' 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3429750 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3429750 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3429750' 00:12:32.572 killing process with pid 3429750 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3429750 00:12:32.572 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3429750 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.831 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.736 00:12:34.736 real 0m19.424s 00:12:34.736 user 0m40.501s 00:12:34.736 sys 0m8.605s 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.736 ************************************ 00:12:34.736 END TEST nvmf_connect_stress 00:12:34.736 ************************************ 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.736 ************************************ 00:12:34.736 START TEST nvmf_fused_ordering 00:12:34.736 ************************************ 00:12:34.736 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:34.996 * Looking for test storage... 00:12:34.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.996 10:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.996 --rc genhtml_branch_coverage=1 00:12:34.996 --rc genhtml_function_coverage=1 00:12:34.996 --rc genhtml_legend=1 00:12:34.996 --rc geninfo_all_blocks=1 00:12:34.996 --rc geninfo_unexecuted_blocks=1 00:12:34.996 00:12:34.996 ' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.996 --rc genhtml_branch_coverage=1 00:12:34.996 --rc genhtml_function_coverage=1 00:12:34.996 --rc genhtml_legend=1 00:12:34.996 --rc geninfo_all_blocks=1 00:12:34.996 --rc geninfo_unexecuted_blocks=1 00:12:34.996 00:12:34.996 ' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.996 --rc genhtml_branch_coverage=1 00:12:34.996 --rc genhtml_function_coverage=1 00:12:34.996 --rc genhtml_legend=1 00:12:34.996 --rc geninfo_all_blocks=1 00:12:34.996 --rc geninfo_unexecuted_blocks=1 00:12:34.996 00:12:34.996 ' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.996 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:34.996 --rc genhtml_branch_coverage=1 00:12:34.996 --rc genhtml_function_coverage=1 00:12:34.996 --rc genhtml_legend=1 00:12:34.996 --rc geninfo_all_blocks=1 00:12:34.996 --rc geninfo_unexecuted_blocks=1 00:12:34.996 00:12:34.996 ' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.996 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.997 10:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.997 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.567 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:41.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.567 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.567 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:41.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.568 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:41.568 Found net devices under 0000:86:00.0: cvl_0_0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:41.568 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:12:41.568 00:12:41.568 --- 10.0.0.2 ping statistics --- 00:12:41.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.568 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:41.568 00:12:41.568 --- 10.0.0.1 ping statistics --- 00:12:41.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.568 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:41.568 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3435147 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3435147 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3435147 ']' 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.568 [2024-11-20 10:29:41.691698] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:12:41.568 [2024-11-20 10:29:41.691743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.568 [2024-11-20 10:29:41.769317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.568 [2024-11-20 10:29:41.810175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.568 [2024-11-20 10:29:41.810211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.568 [2024-11-20 10:29:41.810219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.568 [2024-11-20 10:29:41.810227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.568 [2024-11-20 10:29:41.810232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.568 [2024-11-20 10:29:41.810772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:41.568 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 [2024-11-20 10:29:41.944839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 [2024-11-20 10:29:41.965056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 NULL1 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.569 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.569 [2024-11-20 10:29:42.024545] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:12:41.569 [2024-11-20 10:29:42.024587] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435185 ] 00:12:41.828 Attached to nqn.2016-06.io.spdk:cnode1 00:12:41.828 Namespace ID: 1 size: 1GB 00:12:41.828 fused_ordering(0) 00:12:41.828 fused_ordering(1) 00:12:41.828 fused_ordering(2) 00:12:41.828 fused_ordering(3) 00:12:41.828 fused_ordering(4) 00:12:41.828 fused_ordering(5) 00:12:41.828 fused_ordering(6) 00:12:41.828 fused_ordering(7) 00:12:41.828 fused_ordering(8) 00:12:41.828 fused_ordering(9) 00:12:41.828 fused_ordering(10) 00:12:41.828 fused_ordering(11) 00:12:41.828 fused_ordering(12) 00:12:41.828 fused_ordering(13) 00:12:41.828 fused_ordering(14) 00:12:41.828 fused_ordering(15) 00:12:41.828 fused_ordering(16) 00:12:41.828 fused_ordering(17) 00:12:41.828 fused_ordering(18) 00:12:41.828 fused_ordering(19) 00:12:41.828 fused_ordering(20) 00:12:41.828 fused_ordering(21) 00:12:41.828 fused_ordering(22) 00:12:41.828 fused_ordering(23) 00:12:41.828 fused_ordering(24) 00:12:41.828 fused_ordering(25) 00:12:41.828 fused_ordering(26) 00:12:41.828 fused_ordering(27) 00:12:41.828 
fused_ordering(28) 00:12:41.828 fused_ordering(29) 00:12:41.828 fused_ordering(30) 00:12:41.828 fused_ordering(31) 00:12:41.828 fused_ordering(32) 00:12:41.828 fused_ordering(33) 00:12:41.828 fused_ordering(34) 00:12:41.828 fused_ordering(35) 00:12:41.828 fused_ordering(36) 00:12:41.828 fused_ordering(37) 00:12:41.828 fused_ordering(38) 00:12:41.828 fused_ordering(39) 00:12:41.828 fused_ordering(40) 00:12:41.828 fused_ordering(41) 00:12:41.828 fused_ordering(42) 00:12:41.828 fused_ordering(43) 00:12:41.828 fused_ordering(44) 00:12:41.828 fused_ordering(45) 00:12:41.828 fused_ordering(46) 00:12:41.828 fused_ordering(47) 00:12:41.828 fused_ordering(48) 00:12:41.828 fused_ordering(49) 00:12:41.828 fused_ordering(50) 00:12:41.828 fused_ordering(51) 00:12:41.828 fused_ordering(52) 00:12:41.828 fused_ordering(53) 00:12:41.828 fused_ordering(54) 00:12:41.828 fused_ordering(55) 00:12:41.828 fused_ordering(56) 00:12:41.828 fused_ordering(57) 00:12:41.828 fused_ordering(58) 00:12:41.828 fused_ordering(59) 00:12:41.828 fused_ordering(60) 00:12:41.828 fused_ordering(61) 00:12:41.828 fused_ordering(62) 00:12:41.828 fused_ordering(63) 00:12:41.828 fused_ordering(64) 00:12:41.828 fused_ordering(65) 00:12:41.828 fused_ordering(66) 00:12:41.828 fused_ordering(67) 00:12:41.828 fused_ordering(68) 00:12:41.828 fused_ordering(69) 00:12:41.828 fused_ordering(70) 00:12:41.828 fused_ordering(71) 00:12:41.828 fused_ordering(72) 00:12:41.829 fused_ordering(73) 00:12:41.829 fused_ordering(74) 00:12:41.829 fused_ordering(75) 00:12:41.829 fused_ordering(76) 00:12:41.829 fused_ordering(77) 00:12:41.829 fused_ordering(78) 00:12:41.829 fused_ordering(79) 00:12:41.829 fused_ordering(80) 00:12:41.829 fused_ordering(81) 00:12:41.829 fused_ordering(82) 00:12:41.829 fused_ordering(83) 00:12:41.829 fused_ordering(84) 00:12:41.829 fused_ordering(85) 00:12:41.829 fused_ordering(86) 00:12:41.829 fused_ordering(87) 00:12:41.829 fused_ordering(88) 00:12:41.829 fused_ordering(89) 00:12:41.829 
fused_ordering(90) 00:12:41.829 fused_ordering(91) 00:12:41.829 fused_ordering(92) 00:12:41.829 fused_ordering(93) 00:12:41.829 fused_ordering(94) 00:12:41.829 fused_ordering(95) 00:12:41.829 fused_ordering(96) 00:12:41.829 fused_ordering(97) 00:12:41.829 fused_ordering(98) 00:12:41.829 fused_ordering(99) 00:12:41.829 fused_ordering(100) 00:12:41.829 fused_ordering(101) 00:12:41.829 fused_ordering(102) 00:12:41.829 fused_ordering(103) 00:12:41.829 fused_ordering(104) 00:12:41.829 fused_ordering(105) 00:12:41.829 fused_ordering(106) 00:12:41.829 fused_ordering(107) 00:12:41.829 fused_ordering(108) 00:12:41.829 fused_ordering(109) 00:12:41.829 fused_ordering(110) 00:12:41.829 fused_ordering(111) 00:12:41.829 fused_ordering(112) 00:12:41.829 fused_ordering(113) 00:12:41.829 fused_ordering(114) 00:12:41.829 fused_ordering(115) 00:12:41.829 fused_ordering(116) 00:12:41.829 fused_ordering(117) 00:12:41.829 fused_ordering(118) 00:12:41.829 fused_ordering(119) 00:12:41.829 fused_ordering(120) 00:12:41.829 fused_ordering(121) 00:12:41.829 fused_ordering(122) 00:12:41.829 fused_ordering(123) 00:12:41.829 fused_ordering(124) 00:12:41.829 fused_ordering(125) 00:12:41.829 fused_ordering(126) 00:12:41.829 fused_ordering(127) 00:12:41.829 fused_ordering(128) 00:12:41.829 fused_ordering(129) 00:12:41.829 fused_ordering(130) 00:12:41.829 fused_ordering(131) 00:12:41.829 fused_ordering(132) 00:12:41.829 fused_ordering(133) 00:12:41.829 fused_ordering(134) 00:12:41.829 fused_ordering(135) 00:12:41.829 fused_ordering(136) 00:12:41.829 fused_ordering(137) 00:12:41.829 fused_ordering(138) 00:12:41.829 fused_ordering(139) 00:12:41.829 fused_ordering(140) 00:12:41.829 fused_ordering(141) 00:12:41.829 fused_ordering(142) 00:12:41.829 fused_ordering(143) 00:12:41.829 fused_ordering(144) 00:12:41.829 fused_ordering(145) 00:12:41.829 fused_ordering(146) 00:12:41.829 fused_ordering(147) 00:12:41.829 fused_ordering(148) 00:12:41.829 fused_ordering(149) 00:12:41.829 fused_ordering(150) 
00:12:41.829 fused_ordering(151) 00:12:41.829 fused_ordering(152) 00:12:41.829 fused_ordering(153) 00:12:41.829 fused_ordering(154) 00:12:41.829 fused_ordering(155) 00:12:41.829 fused_ordering(156) 00:12:41.829 fused_ordering(157) 00:12:41.829 fused_ordering(158) 00:12:41.829 fused_ordering(159) 00:12:41.829 fused_ordering(160) 00:12:41.829 fused_ordering(161) 00:12:41.829 fused_ordering(162) 00:12:41.829 fused_ordering(163) 00:12:41.829 fused_ordering(164) 00:12:41.829 fused_ordering(165) 00:12:41.829 fused_ordering(166) 00:12:41.829 fused_ordering(167) 00:12:41.829 fused_ordering(168) 00:12:41.829 fused_ordering(169) 00:12:41.829 fused_ordering(170) 00:12:41.829 fused_ordering(171) 00:12:41.829 fused_ordering(172) 00:12:41.829 fused_ordering(173) 00:12:41.829 fused_ordering(174) 00:12:41.829 fused_ordering(175) 00:12:41.829 fused_ordering(176) 00:12:41.829 fused_ordering(177) 00:12:41.829 fused_ordering(178) 00:12:41.829 fused_ordering(179) 00:12:41.829 fused_ordering(180) 00:12:41.829 fused_ordering(181) 00:12:41.829 fused_ordering(182) 00:12:41.829 fused_ordering(183) 00:12:41.829 fused_ordering(184) 00:12:41.829 fused_ordering(185) 00:12:41.829 fused_ordering(186) 00:12:41.829 fused_ordering(187) 00:12:41.829 fused_ordering(188) 00:12:41.829 fused_ordering(189) 00:12:41.829 fused_ordering(190) 00:12:41.829 fused_ordering(191) 00:12:41.829 fused_ordering(192) 00:12:41.829 fused_ordering(193) 00:12:41.829 fused_ordering(194) 00:12:41.829 fused_ordering(195) 00:12:41.829 fused_ordering(196) 00:12:41.829 fused_ordering(197) 00:12:41.829 fused_ordering(198) 00:12:41.829 fused_ordering(199) 00:12:41.829 fused_ordering(200) 00:12:41.829 fused_ordering(201) 00:12:41.829 fused_ordering(202) 00:12:41.829 fused_ordering(203) 00:12:41.829 fused_ordering(204) 00:12:41.829 fused_ordering(205) 00:12:42.087 fused_ordering(206) 00:12:42.087 fused_ordering(207) 00:12:42.087 fused_ordering(208) 00:12:42.087 fused_ordering(209) 00:12:42.087 fused_ordering(210) 00:12:42.087 
fused_ordering(211) 00:12:42.087 fused_ordering(212) 00:12:42.087 fused_ordering(213) 00:12:42.087 fused_ordering(214) 00:12:42.087 fused_ordering(215) 00:12:42.087 fused_ordering(216) 00:12:42.087 fused_ordering(217) 00:12:42.087 fused_ordering(218) 00:12:42.087 fused_ordering(219) 00:12:42.087 fused_ordering(220) 00:12:42.087 fused_ordering(221) 00:12:42.087 fused_ordering(222) 00:12:42.087 fused_ordering(223) 00:12:42.087 fused_ordering(224) 00:12:42.087 fused_ordering(225) 00:12:42.087 fused_ordering(226) 00:12:42.087 fused_ordering(227) 00:12:42.087 fused_ordering(228) 00:12:42.087 fused_ordering(229) 00:12:42.087 fused_ordering(230) 00:12:42.087 fused_ordering(231) 00:12:42.087 fused_ordering(232) 00:12:42.087 fused_ordering(233) 00:12:42.087 fused_ordering(234) 00:12:42.087 fused_ordering(235) 00:12:42.087 fused_ordering(236) 00:12:42.087 fused_ordering(237) 00:12:42.087 fused_ordering(238) 00:12:42.087 fused_ordering(239) 00:12:42.087 fused_ordering(240) 00:12:42.087 fused_ordering(241) 00:12:42.087 fused_ordering(242) 00:12:42.087 fused_ordering(243) 00:12:42.087 fused_ordering(244) 00:12:42.087 fused_ordering(245) 00:12:42.087 fused_ordering(246) 00:12:42.087 fused_ordering(247) 00:12:42.087 fused_ordering(248) 00:12:42.087 fused_ordering(249) 00:12:42.087 fused_ordering(250) 00:12:42.087 fused_ordering(251) 00:12:42.087 fused_ordering(252) 00:12:42.087 fused_ordering(253) 00:12:42.087 fused_ordering(254) 00:12:42.087 fused_ordering(255) 00:12:42.087 fused_ordering(256) 00:12:42.087 fused_ordering(257) 00:12:42.087 fused_ordering(258) 00:12:42.087 fused_ordering(259) 00:12:42.087 fused_ordering(260) 00:12:42.087 fused_ordering(261) 00:12:42.087 fused_ordering(262) 00:12:42.087 fused_ordering(263) 00:12:42.087 fused_ordering(264) 00:12:42.087 fused_ordering(265) 00:12:42.087 fused_ordering(266) 00:12:42.087 fused_ordering(267) 00:12:42.087 fused_ordering(268) 00:12:42.087 fused_ordering(269) 00:12:42.087 fused_ordering(270) 00:12:42.087 fused_ordering(271) 
00:12:42.087 fused_ordering(272) 00:12:42.087 fused_ordering(273) 00:12:42.087 fused_ordering(274) 00:12:42.087 fused_ordering(275) 00:12:42.087 fused_ordering(276) 00:12:42.087 fused_ordering(277) 00:12:42.087 fused_ordering(278) 00:12:42.087 fused_ordering(279) 00:12:42.087 fused_ordering(280) 00:12:42.087 fused_ordering(281) 00:12:42.087 fused_ordering(282) 00:12:42.087 fused_ordering(283) 00:12:42.087 fused_ordering(284) 00:12:42.087 fused_ordering(285) 00:12:42.087 fused_ordering(286) 00:12:42.087 fused_ordering(287) 00:12:42.087 fused_ordering(288) 00:12:42.087 fused_ordering(289) 00:12:42.087 fused_ordering(290) 00:12:42.087 fused_ordering(291) 00:12:42.087 fused_ordering(292) 00:12:42.087 fused_ordering(293) 00:12:42.087 fused_ordering(294) 00:12:42.087 fused_ordering(295) 00:12:42.087 fused_ordering(296) 00:12:42.087 fused_ordering(297) 00:12:42.087 fused_ordering(298) 00:12:42.087 fused_ordering(299) 00:12:42.087 fused_ordering(300) 00:12:42.087 fused_ordering(301) 00:12:42.087 fused_ordering(302) 00:12:42.087 fused_ordering(303) 00:12:42.087 fused_ordering(304) 00:12:42.087 fused_ordering(305) 00:12:42.087 fused_ordering(306) 00:12:42.087 fused_ordering(307) 00:12:42.087 fused_ordering(308) 00:12:42.087 fused_ordering(309) 00:12:42.087 fused_ordering(310) 00:12:42.087 fused_ordering(311) 00:12:42.087 fused_ordering(312) 00:12:42.087 fused_ordering(313) 00:12:42.087 fused_ordering(314) 00:12:42.087 fused_ordering(315) 00:12:42.087 fused_ordering(316) 00:12:42.087 fused_ordering(317) 00:12:42.087 fused_ordering(318) 00:12:42.087 fused_ordering(319) 00:12:42.087 fused_ordering(320) 00:12:42.087 fused_ordering(321) 00:12:42.087 fused_ordering(322) 00:12:42.087 fused_ordering(323) 00:12:42.087 fused_ordering(324) 00:12:42.087 fused_ordering(325) 00:12:42.087 fused_ordering(326) 00:12:42.087 fused_ordering(327) 00:12:42.087 fused_ordering(328) 00:12:42.087 fused_ordering(329) 00:12:42.087 fused_ordering(330) 00:12:42.087 fused_ordering(331) 00:12:42.087 
fused_ordering(332) 00:12:42.087 ... fused_ordering(815) 00:12:42.349 [repetitive fused_ordering counter output, entries 332-815, elided]
fused_ordering(816) 00:12:42.609 fused_ordering(817) 00:12:42.609 fused_ordering(818) 00:12:42.609 fused_ordering(819) 00:12:42.609 fused_ordering(820) 00:12:43.177
[2024-11-20 10:29:43.749353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad1f0 is same with the state(6) to be set
fused_ordering(821) 00:12:43.177 ... fused_ordering(1023) 00:12:43.177 [repetitive fused_ordering counter output, entries 821-1023, elided]
10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.177 rmmod nvme_tcp 00:12:43.177 
rmmod nvme_fabrics 00:12:43.177 rmmod nvme_keyring 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3435147 ']' 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3435147 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3435147 ']' 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3435147 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435147 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435147' 00:12:43.177 killing process with pid 3435147 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3435147 00:12:43.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3435147 00:12:43.438 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.438 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.976 00:12:45.976 real 0m10.647s 00:12:45.976 user 0m4.883s 00:12:45.976 sys 0m5.869s 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.976 ************************************ 00:12:45.976 END TEST nvmf_fused_ordering 00:12:45.976 
************************************ 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.976 ************************************ 00:12:45.976 START TEST nvmf_ns_masking 00:12:45.976 ************************************ 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.976 * Looking for test storage... 00:12:45.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.976 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.976 --rc genhtml_branch_coverage=1 00:12:45.976 --rc genhtml_function_coverage=1 00:12:45.976 --rc genhtml_legend=1 00:12:45.976 --rc geninfo_all_blocks=1 00:12:45.976 --rc 
geninfo_unexecuted_blocks=1 00:12:45.976 00:12:45.976 ' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.976 --rc genhtml_branch_coverage=1 00:12:45.976 --rc genhtml_function_coverage=1 00:12:45.976 --rc genhtml_legend=1 00:12:45.976 --rc geninfo_all_blocks=1 00:12:45.976 --rc geninfo_unexecuted_blocks=1 00:12:45.976 00:12:45.976 ' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.976 --rc genhtml_branch_coverage=1 00:12:45.976 --rc genhtml_function_coverage=1 00:12:45.976 --rc genhtml_legend=1 00:12:45.976 --rc geninfo_all_blocks=1 00:12:45.976 --rc geninfo_unexecuted_blocks=1 00:12:45.976 00:12:45.976 ' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.976 --rc genhtml_branch_coverage=1 00:12:45.976 --rc genhtml_function_coverage=1 00:12:45.976 --rc genhtml_legend=1 00:12:45.976 --rc geninfo_all_blocks=1 00:12:45.976 --rc geninfo_unexecuted_blocks=1 00:12:45.976 00:12:45.976 ' 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.976 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.977 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.977 10:29:46 
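The export.sh trace above prepends the same three toolchain directories on every source, so by the final `echo` the PATH contains each of them six times over. A small dedup pass would keep the first occurrence of each entry (illustrative only; this is not part of the SPDK scripts):

```shell
# Drop repeated PATH entries, keeping the first occurrence of each.
dedup_path() {
    local out='' seen=':' dir
    local IFS=':'
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) continue ;;   # already kept this entry
        esac
        seen="$seen$dir:"
        out="${out:+$out:}$dir"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```

The duplication is harmless for lookup (the first match wins) but bloats every child environment, which is why the repeated blocks dominate this stretch of the log.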
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5630b484-7a24-4911-824a-5cb80d2d71fd 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=65a61085-04d5-4a20-ad08-c136438de143 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:45.977 10:29:46 
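The `[: : integer expression expected` message captured above comes from testing an empty expansion with `-eq`: `'[' '' -eq 1 ']'` is not a valid integer comparison. The standard defensive idiom is to default the operand before comparing (a generic shell pattern, not a quote of any SPDK fix; the variable name here is mine):

```shell
FLAG=''   # unset/empty, as in the trace

# A bare [ "$FLAG" -eq 1 ] would reproduce the error from the log:
#   [: : integer expression expected
# Defaulting the expansion keeps the test well-formed:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag clear"
fi
```

The error is non-fatal here because `[` merely returns a failure status, so the trace continues past it.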
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=163a3adc-f16d-4583-be79-aec1a499d392 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.977 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.548 10:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:52.548 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.549 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.549 10:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.549 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.549 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.549 Found net devices under 0000:86:00.1: 
cvl_0_1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:12:52.549 00:12:52.549 --- 10.0.0.2 ping statistics --- 00:12:52.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.549 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:52.549 00:12:52.549 --- 10.0.0.1 ping statistics --- 00:12:52.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.549 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
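The two ping summaries above report `rtt min/avg/max/mdev` on the final line; the average round-trip time can be pulled out of that line with a short awk pass (a generic parsing sketch, not something the test scripts themselves do):

```shell
# Final summary line of ping output, as captured in the log above.
ping_line='rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms'

# Split on spaces and slashes: field 7 is min, field 8 is avg.
avg=$(printf '%s\n' "$ping_line" | awk -F'[ /]' '/^rtt/ {print $8}')
echo "avg rtt: ${avg} ms"
```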
start_nvmf_tgt 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3439148 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3439148 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3439148 ']' 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.549 [2024-11-20 10:29:52.434310] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:12:52.549 [2024-11-20 10:29:52.434358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.549 [2024-11-20 10:29:52.513985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.549 [2024-11-20 10:29:52.553678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.549 [2024-11-20 10:29:52.553714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.549 [2024-11-20 10:29:52.553720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.549 [2024-11-20 10:29:52.553727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.549 [2024-11-20 10:29:52.553731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
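After launching `nvmf_tgt` in the namespace, `waitforlisten` blocks until the target is up on `/var/tmp/spdk.sock` (with `max_retries=100`, as the trace shows). A simplified sketch of that poll loop, with the check reduced to path existence (the real helper in autotest_common.sh also probes the RPC socket via rpc.py; the name `wait_for_path` is mine):

```shell
# Poll until a path appears, up to a retry budget, sleeping briefly
# between attempts; returns 1 if the budget is exhausted.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The retry budget bounds how long a dead target can stall the test before the trap-installed cleanup (`nvmftestfini`) runs.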
00:12:52.549 [2024-11-20 10:29:52.554294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:52.549 [2024-11-20 10:29:52.861975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:52.549 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:52.549 Malloc1 00:12:52.549 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:52.808 Malloc2 00:12:52.808 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.808 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:53.066 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.324 [2024-11-20 10:29:53.890047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.324 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:53.324 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 163a3adc-f16d-4583-be79-aec1a499d392 -a 10.0.0.2 -s 4420 -i 4 00:12:53.583 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.583 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.583 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.583 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.583 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:55.486 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.745 [ 0]:0x1 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9706e69fa25e43beb3a7e407243a3e21 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9706e69fa25e43beb3a7e407243a3e21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.745 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:56.004 [ 0]:0x1 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9706e69fa25e43beb3a7e407243a3e21 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9706e69fa25e43beb3a7e407243a3e21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:56.004 [ 1]:0x2 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:56.004 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.262 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.262 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:56.521 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:56.521 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 163a3adc-f16d-4583-be79-aec1a499d392 -a 10.0.0.2 -s 4420 -i 4 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:56.779 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.689 10:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.689 [ 0]:0x2 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.689 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.953 [ 0]:0x1 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.953 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9706e69fa25e43beb3a7e407243a3e21 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9706e69fa25e43beb3a7e407243a3e21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:59.212 [ 1]:0x2 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.212 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.471 
10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.471 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.471 [ 0]:0x2 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.471 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 163a3adc-f16d-4583-be79-aec1a499d392 -a 10.0.0.2 -s 4420 -i 4 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:59.730 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.264 [ 0]:0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9706e69fa25e43beb3a7e407243a3e21 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9706e69fa25e43beb3a7e407243a3e21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 
]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.264 [ 1]:0x2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:02.264 10:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.264 [ 0]:0x2 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.264 
10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:02.264 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.523 [2024-11-20 10:30:03.052080] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:02.523 request: 00:13:02.523 { 00:13:02.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.523 "nsid": 2, 00:13:02.523 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.523 "method": "nvmf_ns_remove_host", 00:13:02.523 "req_id": 1 00:13:02.523 } 00:13:02.523 Got JSON-RPC error response 00:13:02.523 response: 00:13:02.523 { 00:13:02.523 "code": -32602, 00:13:02.523 "message": "Invalid parameters" 00:13:02.523 } 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.523 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:02.524 
10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # 
ns_is_visible 0x2 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.524 [ 0]:0x2 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2551ca882be94b278ddc9a6ab6d9c672 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2551ca882be94b278ddc9a6ab6d9c672 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:02.524 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3441094 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3441094 /var/tmp/host.sock 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3441094 ']' 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:02.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.783 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:02.783 [2024-11-20 10:30:03.335277] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:13:02.783 [2024-11-20 10:30:03.335323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441094 ] 00:13:02.783 [2024-11-20 10:30:03.411714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.783 [2024-11-20 10:30:03.452714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.042 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.042 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:03.042 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.300 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:13:03.558 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5630b484-7a24-4911-824a-5cb80d2d71fd 00:13:03.558 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.558 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5630B4847A244911824A5CB80D2D71FD -i 00:13:03.816 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 65a61085-04d5-4a20-ad08-c136438de143 00:13:03.816 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.816 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 65A6108504D54A20AD08C136438DE143 -i 00:13:03.816 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.074 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:04.332 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:04.332 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 
4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:04.591 nvme0n1 00:13:04.591 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:04.591 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:05.157 nvme1n2 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:05.157 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:05.415 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@135 -- # [[ 5630b484-7a24-4911-824a-5cb80d2d71fd == \5\6\3\0\b\4\8\4\-\7\a\2\4\-\4\9\1\1\-\8\2\4\a\-\5\c\b\8\0\d\2\d\7\1\f\d ]] 00:13:05.415 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:05.415 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:05.415 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:05.674 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 65a61085-04d5-4a20-ad08-c136438de143 == \6\5\a\6\1\0\8\5\-\0\4\d\5\-\4\a\2\0\-\a\d\0\8\-\c\1\3\6\4\3\8\d\e\1\4\3 ]] 00:13:05.674 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.933 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5630b484-7a24-4911-824a-5cb80d2d71fd 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5630B4847A244911824A5CB80D2D71FD 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5630B4847A244911824A5CB80D2D71FD 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5630B4847A244911824A5CB80D2D71FD 00:13:06.192 [2024-11-20 10:30:06.874573] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:06.192 [2024-11-20 10:30:06.874607] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, 
error=-19 00:13:06.192 [2024-11-20 10:30:06.874615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.192 request: 00:13:06.192 { 00:13:06.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.192 "namespace": { 00:13:06.192 "bdev_name": "invalid", 00:13:06.192 "nsid": 1, 00:13:06.192 "nguid": "5630B4847A244911824A5CB80D2D71FD", 00:13:06.192 "no_auto_visible": false 00:13:06.192 }, 00:13:06.192 "method": "nvmf_subsystem_add_ns", 00:13:06.192 "req_id": 1 00:13:06.192 } 00:13:06.192 Got JSON-RPC error response 00:13:06.192 response: 00:13:06.192 { 00:13:06.192 "code": -32602, 00:13:06.192 "message": "Invalid parameters" 00:13:06.192 } 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5630b484-7a24-4911-824a-5cb80d2d71fd 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.192 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5630B4847A244911824A5CB80D2D71FD -i 00:13:06.453 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:08.986 10:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3441094 ']' 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3441094' 00:13:08.986 killing process with pid 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3441094 00:13:08.986 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- 
# trap - SIGINT SIGTERM EXIT 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.244 rmmod nvme_tcp 00:13:09.244 rmmod nvme_fabrics 00:13:09.244 rmmod nvme_keyring 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3439148 ']' 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3439148 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3439148 ']' 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3439148 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.244 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3439148 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3439148' 00:13:09.503 killing process with pid 3439148 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3439148 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3439148 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.503 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.503 
10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.138 00:13:12.138 real 0m26.103s 00:13:12.138 user 0m31.353s 00:13:12.138 sys 0m7.194s 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 ************************************ 00:13:12.138 END TEST nvmf_ns_masking 00:13:12.138 ************************************ 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 ************************************ 00:13:12.138 START TEST nvmf_nvme_cli 00:13:12.138 ************************************ 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.138 * Looking for test storage... 
00:13:12.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:12.138 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.138 --rc 
genhtml_branch_coverage=1 00:13:12.138 --rc genhtml_function_coverage=1 00:13:12.138 --rc genhtml_legend=1 00:13:12.138 --rc geninfo_all_blocks=1 00:13:12.138 --rc geninfo_unexecuted_blocks=1 00:13:12.138 00:13:12.138 ' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.138 --rc genhtml_branch_coverage=1 00:13:12.138 --rc genhtml_function_coverage=1 00:13:12.138 --rc genhtml_legend=1 00:13:12.138 --rc geninfo_all_blocks=1 00:13:12.138 --rc geninfo_unexecuted_blocks=1 00:13:12.138 00:13:12.138 ' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.138 --rc genhtml_branch_coverage=1 00:13:12.138 --rc genhtml_function_coverage=1 00:13:12.138 --rc genhtml_legend=1 00:13:12.138 --rc geninfo_all_blocks=1 00:13:12.138 --rc geninfo_unexecuted_blocks=1 00:13:12.138 00:13:12.138 ' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.138 --rc genhtml_branch_coverage=1 00:13:12.138 --rc genhtml_function_coverage=1 00:13:12.138 --rc genhtml_legend=1 00:13:12.138 --rc geninfo_all_blocks=1 00:13:12.138 --rc geninfo_unexecuted_blocks=1 00:13:12.138 00:13:12.138 ' 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.138 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.139 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.139 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.139 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.139 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:18.705 10:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:18.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:18.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.705 10:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:18.705 Found net devices under 0000:86:00.0: cvl_0_0 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:18.705 Found net devices under 0000:86:00.1: cvl_0_1 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.705 10:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.705 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:13:18.706 00:13:18.706 --- 10.0.0.2 ping statistics --- 00:13:18.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.706 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:18.706 00:13:18.706 --- 10.0.0.1 ping statistics --- 00:13:18.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.706 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.706 10:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3446170 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3446170 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3446170 ']' 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 [2024-11-20 10:30:18.632340] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:13:18.706 [2024-11-20 10:30:18.632387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.706 [2024-11-20 10:30:18.712821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.706 [2024-11-20 10:30:18.757400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.706 [2024-11-20 10:30:18.757437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.706 [2024-11-20 10:30:18.757445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.706 [2024-11-20 10:30:18.757451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.706 [2024-11-20 10:30:18.757456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:18.706 [2024-11-20 10:30:18.758911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.706 [2024-11-20 10:30:18.759055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.706 [2024-11-20 10:30:18.759090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.706 [2024-11-20 10:30:18.759091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 [2024-11-20 10:30:18.897226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 Malloc0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 Malloc1 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 [2024-11-20 10:30:18.993695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.706 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:18.706 00:13:18.706 Discovery Log Number of Records 2, Generation counter 2 00:13:18.706 =====Discovery Log Entry 0====== 00:13:18.706 trtype: tcp 00:13:18.706 adrfam: ipv4 00:13:18.706 subtype: current discovery subsystem 00:13:18.706 treq: not required 00:13:18.706 portid: 0 00:13:18.706 trsvcid: 4420 
00:13:18.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:18.706 traddr: 10.0.0.2 00:13:18.706 eflags: explicit discovery connections, duplicate discovery information 00:13:18.706 sectype: none 00:13:18.706 =====Discovery Log Entry 1====== 00:13:18.706 trtype: tcp 00:13:18.706 adrfam: ipv4 00:13:18.706 subtype: nvme subsystem 00:13:18.706 treq: not required 00:13:18.706 portid: 0 00:13:18.706 trsvcid: 4420 00:13:18.706 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:18.706 traddr: 10.0.0.2 00:13:18.706 eflags: none 00:13:18.706 sectype: none 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:18.706 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.640 10:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:19.640 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.640 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.640 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:19.640 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:19.640 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:22.173 
10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:22.173 /dev/nvme0n2 ]] 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.173 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.174 rmmod nvme_tcp 00:13:22.174 rmmod nvme_fabrics 00:13:22.174 rmmod nvme_keyring 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3446170 ']' 
00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3446170 ']' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446170' 00:13:22.174 killing process with pid 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3446170 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.174 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.711 00:13:24.711 real 0m12.571s 00:13:24.711 user 0m18.071s 00:13:24.711 sys 0m5.120s 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.711 ************************************ 00:13:24.711 END TEST nvmf_nvme_cli 00:13:24.711 ************************************ 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.711 10:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.711 ************************************ 00:13:24.711 
START TEST nvmf_vfio_user 00:13:24.711 ************************************ 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:24.711 * Looking for test storage... 00:13:24.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.711 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:24.711 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.711 --rc genhtml_branch_coverage=1 00:13:24.711 --rc genhtml_function_coverage=1 00:13:24.711 --rc genhtml_legend=1 00:13:24.711 --rc geninfo_all_blocks=1 00:13:24.711 --rc geninfo_unexecuted_blocks=1 00:13:24.711 00:13:24.711 ' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.711 --rc genhtml_branch_coverage=1 00:13:24.711 --rc genhtml_function_coverage=1 00:13:24.711 --rc genhtml_legend=1 00:13:24.711 --rc geninfo_all_blocks=1 00:13:24.711 --rc geninfo_unexecuted_blocks=1 00:13:24.711 00:13:24.711 ' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.711 --rc genhtml_branch_coverage=1 00:13:24.711 --rc genhtml_function_coverage=1 00:13:24.711 --rc genhtml_legend=1 00:13:24.711 --rc geninfo_all_blocks=1 00:13:24.711 --rc geninfo_unexecuted_blocks=1 00:13:24.711 00:13:24.711 ' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.711 --rc genhtml_branch_coverage=1 00:13:24.711 --rc genhtml_function_coverage=1 00:13:24.711 --rc genhtml_legend=1 00:13:24.711 --rc geninfo_all_blocks=1 00:13:24.711 --rc geninfo_unexecuted_blocks=1 00:13:24.711 00:13:24.711 ' 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.711 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.712 
10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:24.712 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3447458 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3447458' 00:13:24.712 Process pid: 3447458 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3447458 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3447458 ']' 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.712 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 [2024-11-20 10:30:25.267122] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:13:24.712 [2024-11-20 10:30:25.267170] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.712 [2024-11-20 10:30:25.341458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.712 [2024-11-20 10:30:25.384414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.712 [2024-11-20 10:30:25.384451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.712 [2024-11-20 10:30:25.384458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.712 [2024-11-20 10:30:25.384464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.712 [2024-11-20 10:30:25.384470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.712 [2024-11-20 10:30:25.385937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.712 [2024-11-20 10:30:25.386049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.712 [2024-11-20 10:30:25.386083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.712 [2024-11-20 10:30:25.386084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.971 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.971 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:24.971 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.906 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:26.163 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:26.163 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:26.163 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.163 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:26.164 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:26.421 Malloc1 00:13:26.421 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:26.421 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:26.680 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:26.938 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.938 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.938 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.196 Malloc2 00:13:27.196 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:27.454 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:27.454 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:27.715 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:27.715 [2024-11-20 10:30:28.357344] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:13:27.715 [2024-11-20 10:30:28.357370] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447943 ] 00:13:27.715 [2024-11-20 10:30:28.395859] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:27.715 [2024-11-20 10:30:28.400159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.715 [2024-11-20 10:30:28.400181] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc5e61b0000 00:13:27.715 [2024-11-20 10:30:28.401156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.402156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.403162] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.404167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.405172] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.406176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.407179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.408177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.715 [2024-11-20 10:30:28.409192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.715 [2024-11-20 10:30:28.409202] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc5e61a5000 00:13:27.715 [2024-11-20 10:30:28.410146] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.715 [2024-11-20 10:30:28.424394] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:27.715 [2024-11-20 10:30:28.424417] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:27.715 [2024-11-20 10:30:28.427308] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:27.715 [2024-11-20 10:30:28.427343] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:27.715 [2024-11-20 10:30:28.427407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:27.715 [2024-11-20 10:30:28.427421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:27.715 [2024-11-20 10:30:28.427427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:27.715 [2024-11-20 10:30:28.428303] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:27.715 [2024-11-20 10:30:28.428312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:27.715 [2024-11-20 10:30:28.428319] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:27.716 [2024-11-20 10:30:28.429307] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:27.716 [2024-11-20 10:30:28.429315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:27.716 [2024-11-20 10:30:28.429322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.430313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:27.716 [2024-11-20 10:30:28.430320] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.431320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:27.716 [2024-11-20 10:30:28.431327] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:27.716 [2024-11-20 10:30:28.431331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.431337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.431445] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:27.716 [2024-11-20 10:30:28.431451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.431456] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:27.716 [2024-11-20 10:30:28.432333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:27.716 [2024-11-20 10:30:28.433334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:27.716 [2024-11-20 10:30:28.434340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:27.716 [2024-11-20 10:30:28.435341] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.716 [2024-11-20 10:30:28.435404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:27.716 [2024-11-20 10:30:28.436353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:27.716 [2024-11-20 10:30:28.436360] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:27.716 [2024-11-20 10:30:28.436365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:27.716 [2024-11-20 10:30:28.436392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.716 [2024-11-20 10:30:28.436412] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.716 [2024-11-20 10:30:28.436415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.716 [2024-11-20 10:30:28.436427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.716 [2024-11-20 10:30:28.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:27.716 [2024-11-20 10:30:28.436481] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:27.716 [2024-11-20 10:30:28.436486] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:27.716 [2024-11-20 10:30:28.436490] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:27.716 [2024-11-20 10:30:28.436495] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:27.716 [2024-11-20 10:30:28.436500] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:27.716 [2024-11-20 10:30:28.436505] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:27.716 [2024-11-20 10:30:28.436509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:27.716 [2024-11-20 10:30:28.436536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:27.716 [2024-11-20 10:30:28.436546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.716 [2024-11-20 
10:30:28.436554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.716 [2024-11-20 10:30:28.436561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.716 [2024-11-20 10:30:28.436569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.716 [2024-11-20 10:30:28.436573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:27.716 [2024-11-20 10:30:28.436598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:27.716 [2024-11-20 10:30:28.436605] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:27.716 [2024-11-20 10:30:28.436610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.716 [2024-11-20 10:30:28.436640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:27.716 [2024-11-20 10:30:28.436691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436704] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:27.716 [2024-11-20 10:30:28.436708] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:27.716 [2024-11-20 10:30:28.436712] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.716 [2024-11-20 10:30:28.436717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:27.716 [2024-11-20 10:30:28.436729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:27.716 [2024-11-20 10:30:28.436736] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:27.716 [2024-11-20 10:30:28.436744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:27.716 [2024-11-20 10:30:28.436758] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.716 [2024-11-20 10:30:28.436762] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.716 [2024-11-20 10:30:28.436765] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.717 [2024-11-20 10:30:28.436770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436812] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.717 [2024-11-20 10:30:28.436816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.717 [2024-11-20 10:30:28.436819] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.717 [2024-11-20 10:30:28.436825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436873] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:27.717 [2024-11-20 10:30:28.436877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:27.717 [2024-11-20 10:30:28.436882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:27.717 [2024-11-20 10:30:28.436898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.436979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.436990] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:27.717 [2024-11-20 10:30:28.436994] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:27.717 [2024-11-20 10:30:28.436997] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:27.717 [2024-11-20 10:30:28.437000] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:27.717 [2024-11-20 10:30:28.437003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:27.717 [2024-11-20 10:30:28.437009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:27.717 [2024-11-20 10:30:28.437016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:27.717 [2024-11-20 10:30:28.437020] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:27.717 [2024-11-20 10:30:28.437022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.717 [2024-11-20 10:30:28.437028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.437034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:27.717 [2024-11-20 10:30:28.437038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.717 [2024-11-20 10:30:28.437041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.717 [2024-11-20 10:30:28.437046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.437053] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:27.717 [2024-11-20 10:30:28.437057] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:27.717 [2024-11-20 10:30:28.437060] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.717 [2024-11-20 10:30:28.437065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:27.717 [2024-11-20 10:30:28.437071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.437083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.437092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:27.717 [2024-11-20 10:30:28.437099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:27.717 ===================================================== 00:13:27.717 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.717 ===================================================== 00:13:27.717 Controller Capabilities/Features 00:13:27.717 ================================ 00:13:27.717 Vendor ID: 4e58 00:13:27.717 Subsystem Vendor ID: 4e58 00:13:27.717 Serial Number: SPDK1 00:13:27.717 Model Number: SPDK bdev Controller 00:13:27.717 Firmware Version: 25.01 00:13:27.717 Recommended Arb Burst: 6 00:13:27.717 IEEE OUI Identifier: 8d 6b 50 00:13:27.717 Multi-path I/O 00:13:27.717 May have multiple subsystem ports: Yes 00:13:27.717 May have multiple controllers: Yes 00:13:27.717 Associated with SR-IOV VF: No 00:13:27.717 Max Data Transfer Size: 131072 00:13:27.717 Max Number of Namespaces: 32 00:13:27.717 Max Number of I/O Queues: 127 00:13:27.717 NVMe Specification Version (VS): 1.3 00:13:27.717 NVMe Specification Version (Identify): 1.3 00:13:27.717 Maximum Queue Entries: 256 00:13:27.717 Contiguous Queues Required: Yes 00:13:27.717 Arbitration Mechanisms Supported 00:13:27.717 Weighted Round Robin: Not Supported 00:13:27.717 Vendor Specific: Not Supported 00:13:27.717 Reset Timeout: 15000 ms 00:13:27.717 Doorbell Stride: 4 bytes 00:13:27.717 NVM Subsystem Reset: Not Supported 00:13:27.717 Command Sets Supported 00:13:27.717 NVM Command Set: Supported 00:13:27.717 Boot Partition: Not Supported 00:13:27.717 Memory 
Page Size Minimum: 4096 bytes 00:13:27.717 Memory Page Size Maximum: 4096 bytes 00:13:27.717 Persistent Memory Region: Not Supported 00:13:27.717 Optional Asynchronous Events Supported 00:13:27.717 Namespace Attribute Notices: Supported 00:13:27.717 Firmware Activation Notices: Not Supported 00:13:27.717 ANA Change Notices: Not Supported 00:13:27.717 PLE Aggregate Log Change Notices: Not Supported 00:13:27.717 LBA Status Info Alert Notices: Not Supported 00:13:27.717 EGE Aggregate Log Change Notices: Not Supported 00:13:27.717 Normal NVM Subsystem Shutdown event: Not Supported 00:13:27.717 Zone Descriptor Change Notices: Not Supported 00:13:27.717 Discovery Log Change Notices: Not Supported 00:13:27.717 Controller Attributes 00:13:27.717 128-bit Host Identifier: Supported 00:13:27.717 Non-Operational Permissive Mode: Not Supported 00:13:27.717 NVM Sets: Not Supported 00:13:27.717 Read Recovery Levels: Not Supported 00:13:27.718 Endurance Groups: Not Supported 00:13:27.718 Predictable Latency Mode: Not Supported 00:13:27.718 Traffic Based Keep ALive: Not Supported 00:13:27.718 Namespace Granularity: Not Supported 00:13:27.718 SQ Associations: Not Supported 00:13:27.718 UUID List: Not Supported 00:13:27.718 Multi-Domain Subsystem: Not Supported 00:13:27.718 Fixed Capacity Management: Not Supported 00:13:27.718 Variable Capacity Management: Not Supported 00:13:27.718 Delete Endurance Group: Not Supported 00:13:27.718 Delete NVM Set: Not Supported 00:13:27.718 Extended LBA Formats Supported: Not Supported 00:13:27.718 Flexible Data Placement Supported: Not Supported 00:13:27.718 00:13:27.718 Controller Memory Buffer Support 00:13:27.718 ================================ 00:13:27.718 Supported: No 00:13:27.718 00:13:27.718 Persistent Memory Region Support 00:13:27.718 ================================ 00:13:27.718 Supported: No 00:13:27.718 00:13:27.718 Admin Command Set Attributes 00:13:27.718 ============================ 00:13:27.718 Security Send/Receive: Not Supported 
00:13:27.718 Format NVM: Not Supported 00:13:27.718 Firmware Activate/Download: Not Supported 00:13:27.718 Namespace Management: Not Supported 00:13:27.718 Device Self-Test: Not Supported 00:13:27.718 Directives: Not Supported 00:13:27.718 NVMe-MI: Not Supported 00:13:27.718 Virtualization Management: Not Supported 00:13:27.718 Doorbell Buffer Config: Not Supported 00:13:27.718 Get LBA Status Capability: Not Supported 00:13:27.718 Command & Feature Lockdown Capability: Not Supported 00:13:27.718 Abort Command Limit: 4 00:13:27.718 Async Event Request Limit: 4 00:13:27.718 Number of Firmware Slots: N/A 00:13:27.718 Firmware Slot 1 Read-Only: N/A 00:13:27.718 Firmware Activation Without Reset: N/A 00:13:27.718 Multiple Update Detection Support: N/A 00:13:27.718 Firmware Update Granularity: No Information Provided 00:13:27.718 Per-Namespace SMART Log: No 00:13:27.718 Asymmetric Namespace Access Log Page: Not Supported 00:13:27.718 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:27.718 Command Effects Log Page: Supported 00:13:27.718 Get Log Page Extended Data: Supported 00:13:27.718 Telemetry Log Pages: Not Supported 00:13:27.718 Persistent Event Log Pages: Not Supported 00:13:27.718 Supported Log Pages Log Page: May Support 00:13:27.718 Commands Supported & Effects Log Page: Not Supported 00:13:27.718 Feature Identifiers & Effects Log Page:May Support 00:13:27.718 NVMe-MI Commands & Effects Log Page: May Support 00:13:27.718 Data Area 4 for Telemetry Log: Not Supported 00:13:27.718 Error Log Page Entries Supported: 128 00:13:27.718 Keep Alive: Supported 00:13:27.718 Keep Alive Granularity: 10000 ms 00:13:27.718 00:13:27.718 NVM Command Set Attributes 00:13:27.718 ========================== 00:13:27.718 Submission Queue Entry Size 00:13:27.718 Max: 64 00:13:27.718 Min: 64 00:13:27.718 Completion Queue Entry Size 00:13:27.718 Max: 16 00:13:27.718 Min: 16 00:13:27.718 Number of Namespaces: 32 00:13:27.718 Compare Command: Supported 00:13:27.718 Write Uncorrectable 
Command: Not Supported 00:13:27.718 Dataset Management Command: Supported 00:13:27.718 Write Zeroes Command: Supported 00:13:27.718 Set Features Save Field: Not Supported 00:13:27.718 Reservations: Not Supported 00:13:27.718 Timestamp: Not Supported 00:13:27.718 Copy: Supported 00:13:27.718 Volatile Write Cache: Present 00:13:27.718 Atomic Write Unit (Normal): 1 00:13:27.718 Atomic Write Unit (PFail): 1 00:13:27.718 Atomic Compare & Write Unit: 1 00:13:27.718 Fused Compare & Write: Supported 00:13:27.718 Scatter-Gather List 00:13:27.718 SGL Command Set: Supported (Dword aligned) 00:13:27.718 SGL Keyed: Not Supported 00:13:27.718 SGL Bit Bucket Descriptor: Not Supported 00:13:27.718 SGL Metadata Pointer: Not Supported 00:13:27.718 Oversized SGL: Not Supported 00:13:27.718 SGL Metadata Address: Not Supported 00:13:27.718 SGL Offset: Not Supported 00:13:27.718 Transport SGL Data Block: Not Supported 00:13:27.718 Replay Protected Memory Block: Not Supported 00:13:27.718 00:13:27.718 Firmware Slot Information 00:13:27.718 ========================= 00:13:27.718 Active slot: 1 00:13:27.718 Slot 1 Firmware Revision: 25.01 00:13:27.718 00:13:27.718 00:13:27.718 Commands Supported and Effects 00:13:27.718 ============================== 00:13:27.718 Admin Commands 00:13:27.718 -------------- 00:13:27.718 Get Log Page (02h): Supported 00:13:27.718 Identify (06h): Supported 00:13:27.718 Abort (08h): Supported 00:13:27.718 Set Features (09h): Supported 00:13:27.718 Get Features (0Ah): Supported 00:13:27.718 Asynchronous Event Request (0Ch): Supported 00:13:27.718 Keep Alive (18h): Supported 00:13:27.718 I/O Commands 00:13:27.718 ------------ 00:13:27.718 Flush (00h): Supported LBA-Change 00:13:27.718 Write (01h): Supported LBA-Change 00:13:27.718 Read (02h): Supported 00:13:27.718 Compare (05h): Supported 00:13:27.718 Write Zeroes (08h): Supported LBA-Change 00:13:27.718 Dataset Management (09h): Supported LBA-Change 00:13:27.718 Copy (19h): Supported LBA-Change 00:13:27.718 
00:13:27.718 Error Log 00:13:27.718 ========= 00:13:27.718 00:13:27.718 Arbitration 00:13:27.718 =========== 00:13:27.718 Arbitration Burst: 1 00:13:27.718 00:13:27.718 Power Management 00:13:27.718 ================ 00:13:27.718 Number of Power States: 1 00:13:27.718 Current Power State: Power State #0 00:13:27.718 Power State #0: 00:13:27.718 Max Power: 0.00 W 00:13:27.718 Non-Operational State: Operational 00:13:27.718 Entry Latency: Not Reported 00:13:27.718 Exit Latency: Not Reported 00:13:27.718 Relative Read Throughput: 0 00:13:27.718 Relative Read Latency: 0 00:13:27.718 Relative Write Throughput: 0 00:13:27.718 Relative Write Latency: 0 00:13:27.718 Idle Power: Not Reported 00:13:27.718 Active Power: Not Reported 00:13:27.718 Non-Operational Permissive Mode: Not Supported 00:13:27.718 00:13:27.718 Health Information 00:13:27.718 ================== 00:13:27.718 Critical Warnings: 00:13:27.718 Available Spare Space: OK 00:13:27.718 Temperature: OK 00:13:27.718 Device Reliability: OK 00:13:27.718 Read Only: No 00:13:27.718 Volatile Memory Backup: OK 00:13:27.718 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:27.718 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:27.718 Available Spare: 0% 00:13:27.718 Available Sp[2024-11-20 10:30:28.437188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:27.718 [2024-11-20 10:30:28.437200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:27.718 [2024-11-20 10:30:28.437227] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:27.718 [2024-11-20 10:30:28.437236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.718 [2024-11-20 10:30:28.437242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.718 [2024-11-20 10:30:28.437247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.718 [2024-11-20 10:30:28.437253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.718 [2024-11-20 10:30:28.438954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.718 [2024-11-20 10:30:28.438964] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:27.718 [2024-11-20 10:30:28.439373] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.718 [2024-11-20 10:30:28.439423] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:27.718 [2024-11-20 10:30:28.439429] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:27.718 [2024-11-20 10:30:28.440373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:27.718 [2024-11-20 10:30:28.440383] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:27.718 [2024-11-20 10:30:28.440429] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:27.719 [2024-11-20 10:30:28.442408] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.977 are Threshold: 0% 00:13:27.977 Life Percentage Used: 0% 
00:13:27.977 Data Units Read: 0 00:13:27.977 Data Units Written: 0 00:13:27.977 Host Read Commands: 0 00:13:27.977 Host Write Commands: 0 00:13:27.977 Controller Busy Time: 0 minutes 00:13:27.977 Power Cycles: 0 00:13:27.977 Power On Hours: 0 hours 00:13:27.977 Unsafe Shutdowns: 0 00:13:27.977 Unrecoverable Media Errors: 0 00:13:27.977 Lifetime Error Log Entries: 0 00:13:27.977 Warning Temperature Time: 0 minutes 00:13:27.977 Critical Temperature Time: 0 minutes 00:13:27.977 00:13:27.977 Number of Queues 00:13:27.977 ================ 00:13:27.977 Number of I/O Submission Queues: 127 00:13:27.977 Number of I/O Completion Queues: 127 00:13:27.977 00:13:27.977 Active Namespaces 00:13:27.977 ================= 00:13:27.977 Namespace ID:1 00:13:27.977 Error Recovery Timeout: Unlimited 00:13:27.977 Command Set Identifier: NVM (00h) 00:13:27.977 Deallocate: Supported 00:13:27.977 Deallocated/Unwritten Error: Not Supported 00:13:27.977 Deallocated Read Value: Unknown 00:13:27.977 Deallocate in Write Zeroes: Not Supported 00:13:27.977 Deallocated Guard Field: 0xFFFF 00:13:27.977 Flush: Supported 00:13:27.977 Reservation: Supported 00:13:27.977 Namespace Sharing Capabilities: Multiple Controllers 00:13:27.977 Size (in LBAs): 131072 (0GiB) 00:13:27.977 Capacity (in LBAs): 131072 (0GiB) 00:13:27.977 Utilization (in LBAs): 131072 (0GiB) 00:13:27.977 NGUID: F8EDE390B62940A6A74F262C3DB61645 00:13:27.977 UUID: f8ede390-b629-40a6-a74f-262c3db61645 00:13:27.977 Thin Provisioning: Not Supported 00:13:27.977 Per-NS Atomic Units: Yes 00:13:27.977 Atomic Boundary Size (Normal): 0 00:13:27.977 Atomic Boundary Size (PFail): 0 00:13:27.977 Atomic Boundary Offset: 0 00:13:27.977 Maximum Single Source Range Length: 65535 00:13:27.977 Maximum Copy Length: 65535 00:13:27.977 Maximum Source Range Count: 1 00:13:27.977 NGUID/EUI64 Never Reused: No 00:13:27.977 Namespace Write Protected: No 00:13:27.977 Number of LBA Formats: 1 00:13:27.977 Current LBA Format: LBA Format #00 00:13:27.977 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:27.977 00:13:27.978 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:27.978 [2024-11-20 10:30:28.672776] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.248 Initializing NVMe Controllers 00:13:33.248 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.248 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:33.248 Initialization complete. Launching workers. 00:13:33.248 ======================================================== 00:13:33.248 Latency(us) 00:13:33.248 Device Information : IOPS MiB/s Average min max 00:13:33.248 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39974.00 156.15 3201.88 957.89 10320.53 00:13:33.248 ======================================================== 00:13:33.248 Total : 39974.00 156.15 3201.88 957.89 10320.53 00:13:33.248 00:13:33.248 [2024-11-20 10:30:33.693325] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.248 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:33.248 [2024-11-20 10:30:33.935478] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.516 Initializing NVMe Controllers 00:13:38.516 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:38.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:38.516 Initialization complete. Launching workers. 00:13:38.516 ======================================================== 00:13:38.516 Latency(us) 00:13:38.516 Device Information : IOPS MiB/s Average min max 00:13:38.516 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16008.08 62.53 8001.32 5466.46 15460.48 00:13:38.516 ======================================================== 00:13:38.516 Total : 16008.08 62.53 8001.32 5466.46 15460.48 00:13:38.516 00:13:38.516 [2024-11-20 10:30:38.974484] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.516 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:38.516 [2024-11-20 10:30:39.186489] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.785 [2024-11-20 10:30:44.261250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.785 Initializing NVMe Controllers 00:13:43.785 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:43.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:43.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:43.785 Initialization complete. 
Launching workers. 00:13:43.785 Starting thread on core 2 00:13:43.785 Starting thread on core 3 00:13:43.785 Starting thread on core 1 00:13:43.785 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:44.043 [2024-11-20 10:30:44.556173] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:47.331 [2024-11-20 10:30:47.623062] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:47.331 Initializing NVMe Controllers 00:13:47.331 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.331 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.332 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:47.332 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:47.332 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:47.332 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:47.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:47.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:47.332 Initialization complete. Launching workers. 
00:13:47.332 Starting thread on core 1 with urgent priority queue 00:13:47.332 Starting thread on core 2 with urgent priority queue 00:13:47.332 Starting thread on core 3 with urgent priority queue 00:13:47.332 Starting thread on core 0 with urgent priority queue 00:13:47.332 SPDK bdev Controller (SPDK1 ) core 0: 7458.67 IO/s 13.41 secs/100000 ios 00:13:47.332 SPDK bdev Controller (SPDK1 ) core 1: 8300.67 IO/s 12.05 secs/100000 ios 00:13:47.332 SPDK bdev Controller (SPDK1 ) core 2: 9015.33 IO/s 11.09 secs/100000 ios 00:13:47.332 SPDK bdev Controller (SPDK1 ) core 3: 9478.67 IO/s 10.55 secs/100000 ios 00:13:47.332 ======================================================== 00:13:47.332 00:13:47.332 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:47.332 [2024-11-20 10:30:47.909409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:47.332 Initializing NVMe Controllers 00:13:47.332 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.332 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.332 Namespace ID: 1 size: 0GB 00:13:47.332 Initialization complete. 00:13:47.332 INFO: using host memory buffer for IO 00:13:47.332 Hello world! 
00:13:47.332 [2024-11-20 10:30:47.943625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:47.332 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:47.591 [2024-11-20 10:30:48.230440] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.527 Initializing NVMe Controllers 00:13:48.527 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.527 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.527 Initialization complete. Launching workers. 00:13:48.527 submit (in ns) avg, min, max = 7146.5, 3240.0, 3999787.0 00:13:48.527 complete (in ns) avg, min, max = 22742.7, 1837.4, 5990652.2 00:13:48.527 00:13:48.527 Submit histogram 00:13:48.527 ================ 00:13:48.527 Range in us Cumulative Count 00:13:48.527 3.228 - 3.242: 0.0061% ( 1) 00:13:48.527 3.242 - 3.256: 0.0122% ( 1) 00:13:48.527 3.256 - 3.270: 0.0182% ( 1) 00:13:48.527 3.270 - 3.283: 0.0547% ( 6) 00:13:48.527 3.283 - 3.297: 0.1156% ( 10) 00:13:48.527 3.297 - 3.311: 0.2555% ( 23) 00:13:48.527 3.311 - 3.325: 0.5231% ( 44) 00:13:48.527 3.325 - 3.339: 0.9853% ( 76) 00:13:48.527 3.339 - 3.353: 2.2809% ( 213) 00:13:48.527 3.353 - 3.367: 6.3926% ( 676) 00:13:48.527 3.367 - 3.381: 11.8363% ( 895) 00:13:48.527 3.381 - 3.395: 18.6667% ( 1123) 00:13:48.527 3.395 - 3.409: 24.8586% ( 1018) 00:13:48.527 3.409 - 3.423: 31.1964% ( 1042) 00:13:48.527 3.423 - 3.437: 36.3299% ( 844) 00:13:48.527 3.437 - 3.450: 42.2663% ( 976) 00:13:48.527 3.450 - 3.464: 47.5944% ( 876) 00:13:48.527 3.464 - 3.478: 51.8095% ( 693) 00:13:48.527 3.478 - 3.492: 56.0367% ( 695) 00:13:48.527 3.492 - 3.506: 61.9488% ( 972) 00:13:48.527 3.506 - 3.520: 68.2136% ( 1030) 00:13:48.527 
3.520 - 3.534: 72.5868% ( 719) 00:13:48.527 3.534 - 3.548: 77.5865% ( 822) 00:13:48.527 3.548 - 3.562: 81.5522% ( 652) 00:13:48.527 3.562 - 3.590: 86.2721% ( 776) 00:13:48.527 3.590 - 3.617: 87.5190% ( 205) 00:13:48.527 3.617 - 3.645: 88.0543% ( 88) 00:13:48.527 3.645 - 3.673: 89.4106% ( 223) 00:13:48.527 3.673 - 3.701: 91.1988% ( 294) 00:13:48.527 3.701 - 3.729: 93.0114% ( 298) 00:13:48.527 3.729 - 3.757: 94.7266% ( 282) 00:13:48.527 3.757 - 3.784: 96.3263% ( 263) 00:13:48.527 3.784 - 3.812: 97.6218% ( 213) 00:13:48.527 3.812 - 3.840: 98.4794% ( 141) 00:13:48.527 3.840 - 3.868: 98.9782% ( 82) 00:13:48.527 3.868 - 3.896: 99.3431% ( 60) 00:13:48.527 3.896 - 3.923: 99.5012% ( 26) 00:13:48.527 3.923 - 3.951: 99.5560% ( 9) 00:13:48.527 3.951 - 3.979: 99.5803% ( 4) 00:13:48.527 3.979 - 4.007: 99.5925% ( 2) 00:13:48.527 4.035 - 4.063: 99.5986% ( 1) 00:13:48.527 4.981 - 5.009: 99.6046% ( 1) 00:13:48.527 5.064 - 5.092: 99.6107% ( 1) 00:13:48.527 5.092 - 5.120: 99.6229% ( 2) 00:13:48.527 5.120 - 5.148: 99.6290% ( 1) 00:13:48.527 5.148 - 5.176: 99.6351% ( 1) 00:13:48.527 5.203 - 5.231: 99.6411% ( 1) 00:13:48.527 5.231 - 5.259: 99.6472% ( 1) 00:13:48.527 5.287 - 5.315: 99.6533% ( 1) 00:13:48.527 5.343 - 5.370: 99.6594% ( 1) 00:13:48.527 5.454 - 5.482: 99.6716% ( 2) 00:13:48.527 5.537 - 5.565: 99.6776% ( 1) 00:13:48.527 5.593 - 5.621: 99.6837% ( 1) 00:13:48.527 5.871 - 5.899: 99.6959% ( 2) 00:13:48.527 5.983 - 6.010: 99.7080% ( 2) 00:13:48.527 6.066 - 6.094: 99.7141% ( 1) 00:13:48.527 6.122 - 6.150: 99.7202% ( 1) 00:13:48.527 6.150 - 6.177: 99.7324% ( 2) 00:13:48.527 6.205 - 6.233: 99.7385% ( 1) 00:13:48.527 6.233 - 6.261: 99.7567% ( 3) 00:13:48.527 6.344 - 6.372: 99.7628% ( 1) 00:13:48.527 6.372 - 6.400: 99.7689% ( 1) 00:13:48.527 6.400 - 6.428: 99.7750% ( 1) 00:13:48.527 6.456 - 6.483: 99.7810% ( 1) 00:13:48.527 6.483 - 6.511: 99.7871% ( 1) 00:13:48.527 6.511 - 6.539: 99.7932% ( 1) 00:13:48.527 6.595 - 6.623: 99.7993% ( 1) 00:13:48.527 6.623 - 6.650: 99.8175% ( 3) 
00:13:48.527 6.678 - 6.706: 99.8236% ( 1) 00:13:48.527 6.762 - 6.790: 99.8358% ( 2) 00:13:48.527 6.929 - 6.957: 99.8419% ( 1) 00:13:48.527 7.068 - 7.096: 99.8479% ( 1) 00:13:48.527 7.123 - 7.179: 99.8540% ( 1) 00:13:48.527 7.179 - 7.235: 99.8601% ( 1) 00:13:48.527 7.346 - 7.402: 99.8723% ( 2) 00:13:48.527 7.402 - 7.457: 99.8784% ( 1) 00:13:48.527 7.736 - 7.791: 99.8844% ( 1) 00:13:48.527 8.070 - 8.125: 99.8905% ( 1) 00:13:48.527 8.237 - 8.292: 99.8966% ( 1) 00:13:48.527 [2024-11-20 10:30:49.251299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.786 8.904 - 8.960: 99.9027% ( 1) 00:13:48.786 9.405 - 9.461: 99.9088% ( 1) 00:13:48.786 3989.148 - 4017.642: 100.0000% ( 15) 00:13:48.786 00:13:48.786 Complete histogram 00:13:48.786 ================== 00:13:48.786 Range in us Cumulative Count 00:13:48.786 1.837 - 1.850: 0.0061% ( 1) 00:13:48.786 1.850 - 1.864: 0.0122% ( 1) 00:13:48.786 1.864 - 1.878: 0.0608% ( 8) 00:13:48.786 1.878 - 1.892: 0.7177% ( 108) 00:13:48.786 1.892 - 1.906: 2.0072% ( 212) 00:13:48.786 1.906 - 1.920: 2.8526% ( 139) 00:13:48.786 1.920 - 1.934: 9.3486% ( 1068) 00:13:48.786 1.934 - 1.948: 50.0578% ( 6693) 00:13:48.786 1.948 - 1.962: 86.0227% ( 5913) 00:13:48.786 1.962 - 1.976: 96.3080% ( 1691) 00:13:48.786 1.976 - 1.990: 98.8079% ( 411) 00:13:48.786 1.990 - 2.003: 99.1850% ( 62) 00:13:48.786 2.003 - 2.017: 99.2580% ( 12) 00:13:48.786 2.017 - 2.031: 99.2640% ( 1) 00:13:48.786 2.045 - 2.059: 99.2701% ( 1) 00:13:48.786 2.059 - 2.073: 99.2762% ( 1) 00:13:48.786 2.087 - 2.101: 99.2884% ( 2) 00:13:48.786 2.101 - 2.115: 99.2944% ( 1) 00:13:48.786 2.115 - 2.129: 99.3066% ( 2) 00:13:48.786 2.170 - 2.184: 99.3127% ( 1) 00:13:48.786 2.254 - 2.268: 99.3188% ( 1) 00:13:48.786 2.282 - 2.296: 99.3249% ( 1) 00:13:48.786 2.296 - 2.310: 99.3309% ( 1) 00:13:48.786 2.323 - 2.337: 99.3370% ( 1) 00:13:48.786 2.337 - 2.351: 99.3431% ( 1) 00:13:48.786 2.351 - 2.365: 99.3492% ( 1) 00:13:48.786 2.407 - 2.421: 
99.3553% ( 1) 00:13:48.786 2.532 - 2.546: 99.3614% ( 1) 00:13:48.786 3.478 - 3.492: 99.3674% ( 1) 00:13:48.786 3.645 - 3.673: 99.3735% ( 1) 00:13:48.786 3.673 - 3.701: 99.3796% ( 1) 00:13:48.786 3.979 - 4.007: 99.3857% ( 1) 00:13:48.786 4.285 - 4.313: 99.3918% ( 1) 00:13:48.786 4.341 - 4.369: 99.4100% ( 3) 00:13:48.786 4.397 - 4.424: 99.4161% ( 1) 00:13:48.786 5.064 - 5.092: 99.4222% ( 1) 00:13:48.786 5.315 - 5.343: 99.4343% ( 2) 00:13:48.786 5.677 - 5.704: 99.4404% ( 1) 00:13:48.786 5.899 - 5.927: 99.4526% ( 2) 00:13:48.786 5.983 - 6.010: 99.4587% ( 1) 00:13:48.786 9.461 - 9.517: 99.4648% ( 1) 00:13:48.786 39.179 - 39.402: 99.4708% ( 1) 00:13:48.786 136.237 - 137.127: 99.4769% ( 1) 00:13:48.786 177.197 - 178.087: 99.4830% ( 1) 00:13:48.786 3989.148 - 4017.642: 99.9939% ( 84) 00:13:48.786 5983.722 - 6012.216: 100.0000% ( 1) 00:13:48.786 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.786 [ 00:13:48.786 { 00:13:48.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.786 "subtype": "Discovery", 00:13:48.786 "listen_addresses": [], 00:13:48.786 "allow_any_host": true, 00:13:48.786 "hosts": [] 00:13:48.786 }, 00:13:48.786 { 00:13:48.786 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.786 "subtype": "NVMe", 00:13:48.786 
"listen_addresses": [ 00:13:48.786 { 00:13:48.786 "trtype": "VFIOUSER", 00:13:48.786 "adrfam": "IPv4", 00:13:48.786 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.786 "trsvcid": "0" 00:13:48.786 } 00:13:48.786 ], 00:13:48.786 "allow_any_host": true, 00:13:48.786 "hosts": [], 00:13:48.786 "serial_number": "SPDK1", 00:13:48.786 "model_number": "SPDK bdev Controller", 00:13:48.786 "max_namespaces": 32, 00:13:48.786 "min_cntlid": 1, 00:13:48.786 "max_cntlid": 65519, 00:13:48.786 "namespaces": [ 00:13:48.786 { 00:13:48.786 "nsid": 1, 00:13:48.786 "bdev_name": "Malloc1", 00:13:48.786 "name": "Malloc1", 00:13:48.786 "nguid": "F8EDE390B62940A6A74F262C3DB61645", 00:13:48.786 "uuid": "f8ede390-b629-40a6-a74f-262c3db61645" 00:13:48.786 } 00:13:48.786 ] 00:13:48.786 }, 00:13:48.786 { 00:13:48.786 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.786 "subtype": "NVMe", 00:13:48.786 "listen_addresses": [ 00:13:48.786 { 00:13:48.786 "trtype": "VFIOUSER", 00:13:48.786 "adrfam": "IPv4", 00:13:48.786 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.786 "trsvcid": "0" 00:13:48.786 } 00:13:48.786 ], 00:13:48.786 "allow_any_host": true, 00:13:48.786 "hosts": [], 00:13:48.786 "serial_number": "SPDK2", 00:13:48.786 "model_number": "SPDK bdev Controller", 00:13:48.786 "max_namespaces": 32, 00:13:48.786 "min_cntlid": 1, 00:13:48.786 "max_cntlid": 65519, 00:13:48.786 "namespaces": [ 00:13:48.786 { 00:13:48.786 "nsid": 1, 00:13:48.786 "bdev_name": "Malloc2", 00:13:48.786 "name": "Malloc2", 00:13:48.786 "nguid": "B53C0C88375D442A89707BE52B0BA7FB", 00:13:48.786 "uuid": "b53c0c88-375d-442a-8970-7be52b0ba7fb" 00:13:48.786 } 00:13:48.786 ] 00:13:48.786 } 00:13:48.786 ] 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3451407 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:48.786 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:49.044 [2024-11-20 10:30:49.637473] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.044 Malloc3 00:13:49.044 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:49.302 [2024-11-20 10:30:49.894368] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:49.302 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:49.302 Asynchronous Event Request test 00:13:49.302 Attaching to 
/var/run/vfio-user/domain/vfio-user1/1 00:13:49.302 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:49.302 Registering asynchronous event callbacks... 00:13:49.302 Starting namespace attribute notice tests for all controllers... 00:13:49.302 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:49.302 aer_cb - Changed Namespace 00:13:49.302 Cleaning up... 00:13:49.561 [ 00:13:49.561 { 00:13:49.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:49.561 "subtype": "Discovery", 00:13:49.561 "listen_addresses": [], 00:13:49.561 "allow_any_host": true, 00:13:49.561 "hosts": [] 00:13:49.561 }, 00:13:49.561 { 00:13:49.561 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:49.561 "subtype": "NVMe", 00:13:49.561 "listen_addresses": [ 00:13:49.561 { 00:13:49.561 "trtype": "VFIOUSER", 00:13:49.561 "adrfam": "IPv4", 00:13:49.561 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:49.561 "trsvcid": "0" 00:13:49.561 } 00:13:49.561 ], 00:13:49.561 "allow_any_host": true, 00:13:49.561 "hosts": [], 00:13:49.561 "serial_number": "SPDK1", 00:13:49.561 "model_number": "SPDK bdev Controller", 00:13:49.561 "max_namespaces": 32, 00:13:49.561 "min_cntlid": 1, 00:13:49.561 "max_cntlid": 65519, 00:13:49.561 "namespaces": [ 00:13:49.561 { 00:13:49.561 "nsid": 1, 00:13:49.561 "bdev_name": "Malloc1", 00:13:49.561 "name": "Malloc1", 00:13:49.561 "nguid": "F8EDE390B62940A6A74F262C3DB61645", 00:13:49.561 "uuid": "f8ede390-b629-40a6-a74f-262c3db61645" 00:13:49.561 }, 00:13:49.561 { 00:13:49.561 "nsid": 2, 00:13:49.561 "bdev_name": "Malloc3", 00:13:49.561 "name": "Malloc3", 00:13:49.561 "nguid": "B5914A1ADACD4CEAA7248A2CA8697403", 00:13:49.561 "uuid": "b5914a1a-dacd-4cea-a724-8a2ca8697403" 00:13:49.561 } 00:13:49.561 ] 00:13:49.561 }, 00:13:49.561 { 00:13:49.561 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:49.561 "subtype": "NVMe", 00:13:49.561 "listen_addresses": [ 00:13:49.561 { 00:13:49.561 "trtype": "VFIOUSER", 00:13:49.561 
"adrfam": "IPv4", 00:13:49.561 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:49.561 "trsvcid": "0" 00:13:49.561 } 00:13:49.561 ], 00:13:49.561 "allow_any_host": true, 00:13:49.561 "hosts": [], 00:13:49.561 "serial_number": "SPDK2", 00:13:49.561 "model_number": "SPDK bdev Controller", 00:13:49.561 "max_namespaces": 32, 00:13:49.561 "min_cntlid": 1, 00:13:49.561 "max_cntlid": 65519, 00:13:49.561 "namespaces": [ 00:13:49.561 { 00:13:49.561 "nsid": 1, 00:13:49.561 "bdev_name": "Malloc2", 00:13:49.561 "name": "Malloc2", 00:13:49.561 "nguid": "B53C0C88375D442A89707BE52B0BA7FB", 00:13:49.561 "uuid": "b53c0c88-375d-442a-8970-7be52b0ba7fb" 00:13:49.561 } 00:13:49.561 ] 00:13:49.561 } 00:13:49.561 ] 00:13:49.561 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3451407 00:13:49.561 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.561 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:49.561 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:49.561 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:49.561 [2024-11-20 10:30:50.145944] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:13:49.561 [2024-11-20 10:30:50.145985] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451618 ] 00:13:49.562 [2024-11-20 10:30:50.186884] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:49.562 [2024-11-20 10:30:50.191157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.562 [2024-11-20 10:30:50.191182] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb8e5bfe000 00:13:49.562 [2024-11-20 10:30:50.192162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.193167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.194174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.195185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.196194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.197202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.198204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.562 
[2024-11-20 10:30:50.199213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.562 [2024-11-20 10:30:50.200217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.562 [2024-11-20 10:30:50.200227] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb8e5bf3000 00:13:49.562 [2024-11-20 10:30:50.201178] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.562 [2024-11-20 10:30:50.215399] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:49.562 [2024-11-20 10:30:50.215426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:49.562 [2024-11-20 10:30:50.217475] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.562 [2024-11-20 10:30:50.217514] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:49.562 [2024-11-20 10:30:50.217583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:49.562 [2024-11-20 10:30:50.217598] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:49.562 [2024-11-20 10:30:50.217606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:49.562 [2024-11-20 10:30:50.218480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:49.562 [2024-11-20 10:30:50.218489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:49.562 [2024-11-20 10:30:50.218496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:49.562 [2024-11-20 10:30:50.219488] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.562 [2024-11-20 10:30:50.219497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:49.562 [2024-11-20 10:30:50.219504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.220498] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:49.562 [2024-11-20 10:30:50.220508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.221502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:49.562 [2024-11-20 10:30:50.221511] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:49.562 [2024-11-20 10:30:50.221515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.221521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.221629] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:49.562 [2024-11-20 10:30:50.221633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.221638] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:49.562 [2024-11-20 10:30:50.222516] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:49.562 [2024-11-20 10:30:50.223518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:49.562 [2024-11-20 10:30:50.224522] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.562 [2024-11-20 10:30:50.225528] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.562 [2024-11-20 10:30:50.225569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:49.562 [2024-11-20 10:30:50.226543] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:49.562 [2024-11-20 10:30:50.226552] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:49.562 [2024-11-20 10:30:50.226557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.226576] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:49.562 [2024-11-20 10:30:50.226587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.226600] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.562 [2024-11-20 10:30:50.226605] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.562 [2024-11-20 10:30:50.226608] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.562 [2024-11-20 10:30:50.226620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.562 [2024-11-20 10:30:50.232954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:49.562 [2024-11-20 10:30:50.232967] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:49.562 [2024-11-20 10:30:50.232972] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:49.562 [2024-11-20 10:30:50.232975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:49.562 [2024-11-20 10:30:50.232980] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:49.562 [2024-11-20 10:30:50.232987] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:49.562 [2024-11-20 10:30:50.232991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:49.562 [2024-11-20 10:30:50.232995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.233004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.233014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:49.562 [2024-11-20 10:30:50.240952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:49.562 [2024-11-20 10:30:50.240964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.562 [2024-11-20 10:30:50.240972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.562 [2024-11-20 10:30:50.240979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.562 [2024-11-20 10:30:50.240986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.562 [2024-11-20 10:30:50.240991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.240997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.241005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:49.562 [2024-11-20 10:30:50.248954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:49.562 [2024-11-20 10:30:50.248963] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:49.562 [2024-11-20 10:30:50.248969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.248975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.248981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:49.562 [2024-11-20 10:30:50.248989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.562 [2024-11-20 10:30:50.256952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:49.563 [2024-11-20 10:30:50.257011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.257019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:49.563 
[2024-11-20 10:30:50.257026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:49.563 [2024-11-20 10:30:50.257030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:49.563 [2024-11-20 10:30:50.257033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.563 [2024-11-20 10:30:50.257039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:49.563 [2024-11-20 10:30:50.264952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:49.563 [2024-11-20 10:30:50.264965] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:49.563 [2024-11-20 10:30:50.264974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.264981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.264987] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.563 [2024-11-20 10:30:50.264991] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.563 [2024-11-20 10:30:50.264994] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.563 [2024-11-20 10:30:50.265000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.563 [2024-11-20 10:30:50.272953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:49.563 [2024-11-20 10:30:50.272967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.272974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.272980] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.563 [2024-11-20 10:30:50.272984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.563 [2024-11-20 10:30:50.272987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.563 [2024-11-20 10:30:50.272993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.563 [2024-11-20 10:30:50.280951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:49.563 [2024-11-20 10:30:50.280961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.280993] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:49.563 [2024-11-20 10:30:50.280997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:49.563 [2024-11-20 10:30:50.281002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:49.563 [2024-11-20 10:30:50.281018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:49.563 [2024-11-20 10:30:50.288952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:49.563 [2024-11-20 10:30:50.288964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:49.822 [2024-11-20 10:30:50.296953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:49.822 [2024-11-20 10:30:50.296965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:49.822 [2024-11-20 10:30:50.304953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:49.822 [2024-11-20 
10:30:50.304966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.822 [2024-11-20 10:30:50.312954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:49.822 [2024-11-20 10:30:50.312970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:49.822 [2024-11-20 10:30:50.312975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:49.822 [2024-11-20 10:30:50.312978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:49.822 [2024-11-20 10:30:50.312982] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:49.822 [2024-11-20 10:30:50.312985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:49.822 [2024-11-20 10:30:50.312991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:49.822 [2024-11-20 10:30:50.312997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:49.822 [2024-11-20 10:30:50.313001] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:49.822 [2024-11-20 10:30:50.313006] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.822 [2024-11-20 10:30:50.313012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:49.822 [2024-11-20 10:30:50.313018] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:49.822 [2024-11-20 10:30:50.313022] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.822 [2024-11-20 10:30:50.313025] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.823 [2024-11-20 10:30:50.313030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.823 [2024-11-20 10:30:50.313037] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:49.823 [2024-11-20 10:30:50.313041] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:49.823 [2024-11-20 10:30:50.313044] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.823 [2024-11-20 10:30:50.313049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:49.823 [2024-11-20 10:30:50.320953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:49.823 [2024-11-20 10:30:50.320967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:49.823 [2024-11-20 10:30:50.320976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:49.823 [2024-11-20 10:30:50.320983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:49.823 ===================================================== 00:13:49.823 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:49.823 ===================================================== 00:13:49.823 Controller Capabilities/Features 00:13:49.823 
================================ 00:13:49.823 Vendor ID: 4e58 00:13:49.823 Subsystem Vendor ID: 4e58 00:13:49.823 Serial Number: SPDK2 00:13:49.823 Model Number: SPDK bdev Controller 00:13:49.823 Firmware Version: 25.01 00:13:49.823 Recommended Arb Burst: 6 00:13:49.823 IEEE OUI Identifier: 8d 6b 50 00:13:49.823 Multi-path I/O 00:13:49.823 May have multiple subsystem ports: Yes 00:13:49.823 May have multiple controllers: Yes 00:13:49.823 Associated with SR-IOV VF: No 00:13:49.823 Max Data Transfer Size: 131072 00:13:49.823 Max Number of Namespaces: 32 00:13:49.823 Max Number of I/O Queues: 127 00:13:49.823 NVMe Specification Version (VS): 1.3 00:13:49.823 NVMe Specification Version (Identify): 1.3 00:13:49.823 Maximum Queue Entries: 256 00:13:49.823 Contiguous Queues Required: Yes 00:13:49.823 Arbitration Mechanisms Supported 00:13:49.823 Weighted Round Robin: Not Supported 00:13:49.823 Vendor Specific: Not Supported 00:13:49.823 Reset Timeout: 15000 ms 00:13:49.823 Doorbell Stride: 4 bytes 00:13:49.823 NVM Subsystem Reset: Not Supported 00:13:49.823 Command Sets Supported 00:13:49.823 NVM Command Set: Supported 00:13:49.823 Boot Partition: Not Supported 00:13:49.823 Memory Page Size Minimum: 4096 bytes 00:13:49.823 Memory Page Size Maximum: 4096 bytes 00:13:49.823 Persistent Memory Region: Not Supported 00:13:49.823 Optional Asynchronous Events Supported 00:13:49.823 Namespace Attribute Notices: Supported 00:13:49.823 Firmware Activation Notices: Not Supported 00:13:49.823 ANA Change Notices: Not Supported 00:13:49.823 PLE Aggregate Log Change Notices: Not Supported 00:13:49.823 LBA Status Info Alert Notices: Not Supported 00:13:49.823 EGE Aggregate Log Change Notices: Not Supported 00:13:49.823 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.823 Zone Descriptor Change Notices: Not Supported 00:13:49.823 Discovery Log Change Notices: Not Supported 00:13:49.823 Controller Attributes 00:13:49.823 128-bit Host Identifier: Supported 00:13:49.823 
Non-Operational Permissive Mode: Not Supported 00:13:49.823 NVM Sets: Not Supported 00:13:49.823 Read Recovery Levels: Not Supported 00:13:49.823 Endurance Groups: Not Supported 00:13:49.823 Predictable Latency Mode: Not Supported 00:13:49.823 Traffic Based Keep ALive: Not Supported 00:13:49.823 Namespace Granularity: Not Supported 00:13:49.823 SQ Associations: Not Supported 00:13:49.823 UUID List: Not Supported 00:13:49.823 Multi-Domain Subsystem: Not Supported 00:13:49.823 Fixed Capacity Management: Not Supported 00:13:49.823 Variable Capacity Management: Not Supported 00:13:49.823 Delete Endurance Group: Not Supported 00:13:49.823 Delete NVM Set: Not Supported 00:13:49.823 Extended LBA Formats Supported: Not Supported 00:13:49.823 Flexible Data Placement Supported: Not Supported 00:13:49.823 00:13:49.823 Controller Memory Buffer Support 00:13:49.823 ================================ 00:13:49.823 Supported: No 00:13:49.823 00:13:49.823 Persistent Memory Region Support 00:13:49.823 ================================ 00:13:49.823 Supported: No 00:13:49.823 00:13:49.823 Admin Command Set Attributes 00:13:49.823 ============================ 00:13:49.823 Security Send/Receive: Not Supported 00:13:49.823 Format NVM: Not Supported 00:13:49.823 Firmware Activate/Download: Not Supported 00:13:49.823 Namespace Management: Not Supported 00:13:49.823 Device Self-Test: Not Supported 00:13:49.823 Directives: Not Supported 00:13:49.823 NVMe-MI: Not Supported 00:13:49.823 Virtualization Management: Not Supported 00:13:49.823 Doorbell Buffer Config: Not Supported 00:13:49.823 Get LBA Status Capability: Not Supported 00:13:49.823 Command & Feature Lockdown Capability: Not Supported 00:13:49.823 Abort Command Limit: 4 00:13:49.823 Async Event Request Limit: 4 00:13:49.823 Number of Firmware Slots: N/A 00:13:49.823 Firmware Slot 1 Read-Only: N/A 00:13:49.823 Firmware Activation Without Reset: N/A 00:13:49.823 Multiple Update Detection Support: N/A 00:13:49.823 Firmware Update 
Granularity: No Information Provided 00:13:49.823 Per-Namespace SMART Log: No 00:13:49.823 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.823 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:49.823 Command Effects Log Page: Supported 00:13:49.823 Get Log Page Extended Data: Supported 00:13:49.823 Telemetry Log Pages: Not Supported 00:13:49.823 Persistent Event Log Pages: Not Supported 00:13:49.823 Supported Log Pages Log Page: May Support 00:13:49.823 Commands Supported & Effects Log Page: Not Supported 00:13:49.823 Feature Identifiers & Effects Log Page:May Support 00:13:49.823 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.823 Data Area 4 for Telemetry Log: Not Supported 00:13:49.823 Error Log Page Entries Supported: 128 00:13:49.823 Keep Alive: Supported 00:13:49.823 Keep Alive Granularity: 10000 ms 00:13:49.823 00:13:49.823 NVM Command Set Attributes 00:13:49.823 ========================== 00:13:49.823 Submission Queue Entry Size 00:13:49.823 Max: 64 00:13:49.823 Min: 64 00:13:49.823 Completion Queue Entry Size 00:13:49.823 Max: 16 00:13:49.823 Min: 16 00:13:49.823 Number of Namespaces: 32 00:13:49.823 Compare Command: Supported 00:13:49.823 Write Uncorrectable Command: Not Supported 00:13:49.823 Dataset Management Command: Supported 00:13:49.823 Write Zeroes Command: Supported 00:13:49.823 Set Features Save Field: Not Supported 00:13:49.823 Reservations: Not Supported 00:13:49.823 Timestamp: Not Supported 00:13:49.823 Copy: Supported 00:13:49.823 Volatile Write Cache: Present 00:13:49.823 Atomic Write Unit (Normal): 1 00:13:49.823 Atomic Write Unit (PFail): 1 00:13:49.823 Atomic Compare & Write Unit: 1 00:13:49.823 Fused Compare & Write: Supported 00:13:49.823 Scatter-Gather List 00:13:49.823 SGL Command Set: Supported (Dword aligned) 00:13:49.823 SGL Keyed: Not Supported 00:13:49.823 SGL Bit Bucket Descriptor: Not Supported 00:13:49.823 SGL Metadata Pointer: Not Supported 00:13:49.823 Oversized SGL: Not Supported 00:13:49.823 SGL 
Metadata Address: Not Supported 00:13:49.823 SGL Offset: Not Supported 00:13:49.823 Transport SGL Data Block: Not Supported 00:13:49.823 Replay Protected Memory Block: Not Supported 00:13:49.823 00:13:49.823 Firmware Slot Information 00:13:49.823 ========================= 00:13:49.823 Active slot: 1 00:13:49.823 Slot 1 Firmware Revision: 25.01 00:13:49.823 00:13:49.823 00:13:49.823 Commands Supported and Effects 00:13:49.823 ============================== 00:13:49.823 Admin Commands 00:13:49.823 -------------- 00:13:49.823 Get Log Page (02h): Supported 00:13:49.823 Identify (06h): Supported 00:13:49.823 Abort (08h): Supported 00:13:49.823 Set Features (09h): Supported 00:13:49.823 Get Features (0Ah): Supported 00:13:49.823 Asynchronous Event Request (0Ch): Supported 00:13:49.823 Keep Alive (18h): Supported 00:13:49.823 I/O Commands 00:13:49.823 ------------ 00:13:49.823 Flush (00h): Supported LBA-Change 00:13:49.823 Write (01h): Supported LBA-Change 00:13:49.823 Read (02h): Supported 00:13:49.823 Compare (05h): Supported 00:13:49.823 Write Zeroes (08h): Supported LBA-Change 00:13:49.823 Dataset Management (09h): Supported LBA-Change 00:13:49.823 Copy (19h): Supported LBA-Change 00:13:49.823 00:13:49.823 Error Log 00:13:49.823 ========= 00:13:49.823 00:13:49.823 Arbitration 00:13:49.823 =========== 00:13:49.823 Arbitration Burst: 1 00:13:49.823 00:13:49.823 Power Management 00:13:49.823 ================ 00:13:49.823 Number of Power States: 1 00:13:49.823 Current Power State: Power State #0 00:13:49.823 Power State #0: 00:13:49.823 Max Power: 0.00 W 00:13:49.823 Non-Operational State: Operational 00:13:49.823 Entry Latency: Not Reported 00:13:49.823 Exit Latency: Not Reported 00:13:49.823 Relative Read Throughput: 0 00:13:49.823 Relative Read Latency: 0 00:13:49.823 Relative Write Throughput: 0 00:13:49.823 Relative Write Latency: 0 00:13:49.823 Idle Power: Not Reported 00:13:49.823 Active Power: Not Reported 00:13:49.823 Non-Operational Permissive Mode: Not 
Supported 00:13:49.823 00:13:49.823 Health Information 00:13:49.823 ================== 00:13:49.823 Critical Warnings: 00:13:49.823 Available Spare Space: OK 00:13:49.823 Temperature: OK 00:13:49.823 Device Reliability: OK 00:13:49.823 Read Only: No 00:13:49.823 Volatile Memory Backup: OK 00:13:49.823 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:49.823 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:49.823 Available Spare: 0% 00:13:49.823 Available Sp[2024-11-20 10:30:50.321075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:49.823 [2024-11-20 10:30:50.328953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:49.823 [2024-11-20 10:30:50.328983] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:49.824 [2024-11-20 10:30:50.328992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.824 [2024-11-20 10:30:50.328998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.824 [2024-11-20 10:30:50.329003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.824 [2024-11-20 10:30:50.329009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.824 [2024-11-20 10:30:50.332953] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.824 [2024-11-20 10:30:50.332964] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:49.824 
[2024-11-20 10:30:50.333078] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.824 [2024-11-20 10:30:50.333122] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:49.824 [2024-11-20 10:30:50.333128] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:49.824 [2024-11-20 10:30:50.334086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:49.824 [2024-11-20 10:30:50.334097] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:49.824 [2024-11-20 10:30:50.334151] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:49.824 [2024-11-20 10:30:50.335141] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.824 are Threshold: 0% 00:13:49.824 Life Percentage Used: 0% 00:13:49.824 Data Units Read: 0 00:13:49.824 Data Units Written: 0 00:13:49.824 Host Read Commands: 0 00:13:49.824 Host Write Commands: 0 00:13:49.824 Controller Busy Time: 0 minutes 00:13:49.824 Power Cycles: 0 00:13:49.824 Power On Hours: 0 hours 00:13:49.824 Unsafe Shutdowns: 0 00:13:49.824 Unrecoverable Media Errors: 0 00:13:49.824 Lifetime Error Log Entries: 0 00:13:49.824 Warning Temperature Time: 0 minutes 00:13:49.824 Critical Temperature Time: 0 minutes 00:13:49.824 00:13:49.824 Number of Queues 00:13:49.824 ================ 00:13:49.824 Number of I/O Submission Queues: 127 00:13:49.824 Number of I/O Completion Queues: 127 00:13:49.824 00:13:49.824 Active Namespaces 00:13:49.824 ================= 00:13:49.824 Namespace ID:1 00:13:49.824 Error Recovery Timeout: Unlimited 
00:13:49.824 Command Set Identifier: NVM (00h) 00:13:49.824 Deallocate: Supported 00:13:49.824 Deallocated/Unwritten Error: Not Supported 00:13:49.824 Deallocated Read Value: Unknown 00:13:49.824 Deallocate in Write Zeroes: Not Supported 00:13:49.824 Deallocated Guard Field: 0xFFFF 00:13:49.824 Flush: Supported 00:13:49.824 Reservation: Supported 00:13:49.824 Namespace Sharing Capabilities: Multiple Controllers 00:13:49.824 Size (in LBAs): 131072 (0GiB) 00:13:49.824 Capacity (in LBAs): 131072 (0GiB) 00:13:49.824 Utilization (in LBAs): 131072 (0GiB) 00:13:49.824 NGUID: B53C0C88375D442A89707BE52B0BA7FB 00:13:49.824 UUID: b53c0c88-375d-442a-8970-7be52b0ba7fb 00:13:49.824 Thin Provisioning: Not Supported 00:13:49.824 Per-NS Atomic Units: Yes 00:13:49.824 Atomic Boundary Size (Normal): 0 00:13:49.824 Atomic Boundary Size (PFail): 0 00:13:49.824 Atomic Boundary Offset: 0 00:13:49.824 Maximum Single Source Range Length: 65535 00:13:49.824 Maximum Copy Length: 65535 00:13:49.824 Maximum Source Range Count: 1 00:13:49.824 NGUID/EUI64 Never Reused: No 00:13:49.824 Namespace Write Protected: No 00:13:49.824 Number of LBA Formats: 1 00:13:49.824 Current LBA Format: LBA Format #00 00:13:49.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.824 00:13:49.824 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:50.083 [2024-11-20 10:30:50.558368] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.351 Initializing NVMe Controllers 00:13:55.351 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:55.351 Initialization complete. Launching workers. 00:13:55.351 ======================================================== 00:13:55.351 Latency(us) 00:13:55.351 Device Information : IOPS MiB/s Average min max 00:13:55.351 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.36 156.08 3203.31 953.96 7605.97 00:13:55.351 ======================================================== 00:13:55.351 Total : 39956.36 156.08 3203.31 953.96 7605.97 00:13:55.351 00:13:55.351 [2024-11-20 10:30:55.663204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.351 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:55.351 [2024-11-20 10:30:55.904912] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.617 Initializing NVMe Controllers 00:14:00.617 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:00.617 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:00.617 Initialization complete. Launching workers. 
00:14:00.617 ======================================================== 00:14:00.617 Latency(us) 00:14:00.617 Device Information : IOPS MiB/s Average min max 00:14:00.617 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39936.95 156.00 3204.87 965.71 8100.52 00:14:00.617 ======================================================== 00:14:00.617 Total : 39936.95 156.00 3204.87 965.71 8100.52 00:14:00.617 00:14:00.617 [2024-11-20 10:31:00.927791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.618 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:00.618 [2024-11-20 10:31:01.139257] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.885 [2024-11-20 10:31:06.274041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.885 Initializing NVMe Controllers 00:14:05.885 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.885 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:05.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:05.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:05.885 Initialization complete. Launching workers. 
00:14:05.885 Starting thread on core 2 00:14:05.885 Starting thread on core 3 00:14:05.885 Starting thread on core 1 00:14:05.885 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:05.885 [2024-11-20 10:31:06.572430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.077 [2024-11-20 10:31:10.484167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.077 Initializing NVMe Controllers 00:14:10.077 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.077 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.077 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:10.077 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:10.077 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:10.077 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:10.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:10.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:10.077 Initialization complete. Launching workers. 
00:14:10.077 Starting thread on core 1 with urgent priority queue 00:14:10.077 Starting thread on core 2 with urgent priority queue 00:14:10.077 Starting thread on core 3 with urgent priority queue 00:14:10.077 Starting thread on core 0 with urgent priority queue 00:14:10.077 SPDK bdev Controller (SPDK2 ) core 0: 6520.67 IO/s 15.34 secs/100000 ios 00:14:10.077 SPDK bdev Controller (SPDK2 ) core 1: 5634.67 IO/s 17.75 secs/100000 ios 00:14:10.077 SPDK bdev Controller (SPDK2 ) core 2: 5132.00 IO/s 19.49 secs/100000 ios 00:14:10.077 SPDK bdev Controller (SPDK2 ) core 3: 4840.33 IO/s 20.66 secs/100000 ios 00:14:10.077 ======================================================== 00:14:10.077 00:14:10.077 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:10.077 [2024-11-20 10:31:10.770495] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.077 Initializing NVMe Controllers 00:14:10.077 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.077 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.077 Namespace ID: 1 size: 0GB 00:14:10.077 Initialization complete. 00:14:10.077 INFO: using host memory buffer for IO 00:14:10.077 Hello world! 
00:14:10.077 [2024-11-20 10:31:10.780551] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.335 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:10.335 [2024-11-20 10:31:11.063619] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.710 Initializing NVMe Controllers 00:14:11.710 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.710 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.710 Initialization complete. Launching workers. 00:14:11.710 submit (in ns) avg, min, max = 6647.8, 3245.2, 3999104.3 00:14:11.710 complete (in ns) avg, min, max = 20101.2, 1781.7, 7986757.4 00:14:11.710 00:14:11.710 Submit histogram 00:14:11.710 ================ 00:14:11.710 Range in us Cumulative Count 00:14:11.710 3.242 - 3.256: 0.0061% ( 1) 00:14:11.710 3.256 - 3.270: 0.0182% ( 2) 00:14:11.710 3.270 - 3.283: 0.0547% ( 6) 00:14:11.710 3.283 - 3.297: 0.1093% ( 9) 00:14:11.710 3.297 - 3.311: 0.2308% ( 20) 00:14:11.710 3.311 - 3.325: 0.4919% ( 43) 00:14:11.710 3.325 - 3.339: 1.3117% ( 135) 00:14:11.710 3.339 - 3.353: 3.9291% ( 431) 00:14:11.710 3.353 - 3.367: 8.8298% ( 807) 00:14:11.710 3.367 - 3.381: 14.8661% ( 994) 00:14:11.710 3.381 - 3.395: 21.5886% ( 1107) 00:14:11.710 3.395 - 3.409: 28.3051% ( 1106) 00:14:11.710 3.409 - 3.423: 33.6491% ( 880) 00:14:11.710 3.423 - 3.437: 38.7502% ( 840) 00:14:11.710 3.437 - 3.450: 44.5497% ( 955) 00:14:11.710 3.450 - 3.464: 49.3107% ( 784) 00:14:11.710 3.464 - 3.478: 53.2762% ( 653) 00:14:11.710 3.478 - 3.492: 57.3086% ( 664) 00:14:11.710 3.492 - 3.506: 63.6607% ( 1046) 00:14:11.710 3.506 - 3.520: 69.3569% ( 938) 00:14:11.710 3.520 - 3.534: 73.3588% ( 659) 
00:14:11.710  3.534 -  3.548: 78.4721% (  842)
00:14:11.710  3.548 -  3.562: 82.5408% (  670)
00:14:11.710  3.562 -  3.590: 86.7371% (  691)
00:14:11.710  3.590 -  3.617: 87.7148% (  161)
00:14:11.710  3.617 -  3.645: 88.5407% (  136)
00:14:11.710  3.645 -  3.673: 90.0103% (  242)
00:14:11.710  3.673 -  3.701: 91.8139% (  297)
00:14:11.710  3.701 -  3.729: 93.3868% (  259)
00:14:11.710  3.729 -  3.757: 95.0507% (  274)
00:14:11.710  3.757 -  3.784: 96.6053% (  256)
00:14:11.710  3.784 -  3.812: 98.0021% (  230)
00:14:11.710  3.812 -  3.840: 98.7551% (  124)
00:14:11.710  3.840 -  3.868: 99.1863% (   71)
00:14:11.710  3.868 -  3.896: 99.5020% (   52)
00:14:11.710  3.896 -  3.923: 99.6296% (   21)
00:14:11.710  3.923 -  3.951: 99.6478% (    3)
00:14:11.710  3.951 -  3.979: 99.6721% (    4)
00:14:11.710  4.007 -  4.035: 99.6842% (    2)
00:14:11.710  4.035 -  4.063: 99.6964% (    2)
00:14:11.710  4.118 -  4.146: 99.7024% (    1)
00:14:11.710  5.259 -  5.287: 99.7085% (    1)
00:14:11.710  5.287 -  5.315: 99.7146% (    1)
00:14:11.710  5.482 -  5.510: 99.7207% (    1)
00:14:11.710  5.593 -  5.621: 99.7267% (    1)
00:14:11.710  5.621 -  5.649: 99.7328% (    1)
00:14:11.710  5.649 -  5.677: 99.7389% (    1)
00:14:11.710  5.677 -  5.704: 99.7449% (    1)
00:14:11.710  5.760 -  5.788: 99.7632% (    3)
00:14:11.710  5.788 -  5.816: 99.7692% (    1)
00:14:11.710  5.899 -  5.927: 99.7753% (    1)
00:14:11.710  5.927 -  5.955: 99.7814% (    1)
00:14:11.710  5.983 -  6.010: 99.7935% (    2)
00:14:11.710  6.010 -  6.038: 99.8057% (    2)
00:14:11.710  6.066 -  6.094: 99.8117% (    1)
00:14:11.710  6.150 -  6.177: 99.8178% (    1)
00:14:11.710  6.177 -  6.205: 99.8239% (    1)
00:14:11.710  6.205 -  6.233: 99.8360% (    2)
00:14:11.710  6.233 -  6.261: 99.8482% (    2)
00:14:11.710  6.317 -  6.344: 99.8603% (    2)
00:14:11.710  6.372 -  6.400: 99.8664% (    1)
00:14:11.710  6.511 -  6.539: 99.8725% (    1)
00:14:11.710  6.984 -  7.012: 99.8785% (    1)
00:14:11.710  7.179 -  7.235: 99.8846% (    1)
00:14:11.710  7.290 -  7.346: 99.8907% (    1)
00:14:11.710  7.569 -  7.624: 99.8968% (    1)
00:14:11.710  7.680 -  7.736: 99.9028% (    1)
00:14:11.710  8.070 -  8.125: 99.9089% (    1)
00:14:11.710  8.515 -  8.570: 99.9150% (    1)
00:14:11.710  8.626 -  8.682: 99.9211% (    1)
00:14:11.710 3989.148 - 4017.642: 100.0000% (   13)
00:14:11.710 
00:14:11.710 Complete histogram
00:14:11.710 ==================
00:14:11.710 Range in us    Cumulative Count
00:14:11.710  1.781 -  1.795:  0.0364% (    6)
00:14:11.710  1.795 -  1.809:  0.0486% (    2)
00:14:11.710  1.809 -  1.823:  0.1032% (    9)
00:14:11.710  1.823 -  1.837:  1.1356% (  170)
00:14:11.710  1.837 -  1.850:  3.3400% (  363)
00:14:11.710  1.850 -  1.864:  4.8582% (  250)
00:14:11.710  1.864 - [2024-11-20 10:31:12.158005] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:11.710  1.878:  9.6860% (  795)
00:14:11.710  1.878 -  1.892: 53.8592% ( 7274)
00:14:11.710  1.892 -  1.906: 86.9254% ( 5445)
00:14:11.710  1.906 -  1.920: 96.1074% ( 1512)
00:14:11.711  1.920 -  1.934: 98.8037% (  444)
00:14:11.711  1.934 -  1.948: 99.1680% (   60)
00:14:11.711  1.948 -  1.962: 99.2652% (   16)
00:14:11.711  1.962 -  1.976: 99.3016% (    6)
00:14:11.711  1.976 -  1.990: 99.3077% (    1)
00:14:11.711  1.990 -  2.003: 99.3138% (    1)
00:14:11.711  2.003 -  2.017: 99.3259% (    2)
00:14:11.711  2.031 -  2.045: 99.3320% (    1)
00:14:11.711  2.045 -  2.059: 99.3441% (    2)
00:14:11.711  2.059 -  2.073: 99.3684% (    4)
00:14:11.711  2.073 -  2.087: 99.3745% (    1)
00:14:11.711  2.101 -  2.115: 99.3806% (    1)
00:14:11.711  2.157 -  2.170: 99.3988% (    3)
00:14:11.711  2.268 -  2.282: 99.4049% (    1)
00:14:11.711  2.310 -  2.323: 99.4109% (    1)
00:14:11.711  2.365 -  2.379: 99.4231% (    2)
00:14:11.711  3.590 -  3.617: 99.4292% (    1)
00:14:11.711  3.617 -  3.645: 99.4352% (    1)
00:14:11.711  4.007 -  4.035: 99.4413% (    1)
00:14:11.711  4.063 -  4.090: 99.4474% (    1)
00:14:11.711  4.090 -  4.118: 99.4535% (    1)
00:14:11.711  4.118 -  4.146: 99.4595% (    1)
00:14:11.711  4.202 -  4.230: 99.4656% (    1)
00:14:11.711  4.341 -  4.369: 99.4717% (    1)
00:14:11.711  4.591 -  4.619: 99.4777% (    1)
00:14:11.711  4.730 -  4.758: 99.4838% (    1)
00:14:11.711  4.758 -  4.786: 99.4899% (    1)
00:14:11.711  5.816 -  5.843: 99.4960% (    1)
00:14:11.711  5.843 -  5.871: 99.5081% (    2)
00:14:11.711  5.871 -  5.899: 99.5142% (    1)
00:14:11.711  6.010 -  6.038: 99.5203% (    1)
00:14:11.711  6.790 -  6.817: 99.5263% (    1)
00:14:11.711  6.845 -  6.873: 99.5324% (    1)
00:14:11.711 12.355 - 12.410: 99.5385% (    1)
00:14:11.711 12.967 - 13.023: 99.5445% (    1)
00:14:11.711 176.306 - 177.197: 99.5506% (    1)
00:14:11.711 3989.148 - 4017.642: 99.9879% (   72)
00:14:11.711 4017.642 - 4046.136: 99.9939% (    1)
00:14:11.711 7978.296 - 8035.283: 100.0000% (    1)
00:14:11.711 
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:11.711 [
00:14:11.711   {
00:14:11.711     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:11.711     "subtype": "Discovery",
00:14:11.711     "listen_addresses": [],
00:14:11.711     "allow_any_host": true,
00:14:11.711     "hosts": []
00:14:11.711   },
00:14:11.711   {
00:14:11.711     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:11.711     "subtype": "NVMe",
00:14:11.711     "listen_addresses": [
00:14:11.711       {
00:14:11.711         "trtype": "VFIOUSER",
00:14:11.711         "adrfam": "IPv4",
00:14:11.711         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:11.711         "trsvcid": "0"
00:14:11.711       }
00:14:11.711     ],
00:14:11.711     "allow_any_host": true,
00:14:11.711     "hosts": [],
00:14:11.711     "serial_number": "SPDK1",
00:14:11.711     "model_number": "SPDK bdev Controller",
00:14:11.711     "max_namespaces": 32,
00:14:11.711     "min_cntlid": 1,
00:14:11.711     "max_cntlid": 65519,
00:14:11.711     "namespaces": [
00:14:11.711       {
00:14:11.711         "nsid": 1,
00:14:11.711         "bdev_name": "Malloc1",
00:14:11.711         "name": "Malloc1",
00:14:11.711         "nguid": "F8EDE390B62940A6A74F262C3DB61645",
00:14:11.711         "uuid": "f8ede390-b629-40a6-a74f-262c3db61645"
00:14:11.711       },
00:14:11.711       {
00:14:11.711         "nsid": 2,
00:14:11.711         "bdev_name": "Malloc3",
00:14:11.711         "name": "Malloc3",
00:14:11.711         "nguid": "B5914A1ADACD4CEAA7248A2CA8697403",
00:14:11.711         "uuid": "b5914a1a-dacd-4cea-a724-8a2ca8697403"
00:14:11.711       }
00:14:11.711     ]
00:14:11.711   },
00:14:11.711   {
00:14:11.711     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:11.711     "subtype": "NVMe",
00:14:11.711     "listen_addresses": [
00:14:11.711       {
00:14:11.711         "trtype": "VFIOUSER",
00:14:11.711         "adrfam": "IPv4",
00:14:11.711         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:11.711         "trsvcid": "0"
00:14:11.711       }
00:14:11.711     ],
00:14:11.711     "allow_any_host": true,
00:14:11.711     "hosts": [],
00:14:11.711     "serial_number": "SPDK2",
00:14:11.711     "model_number": "SPDK bdev Controller",
00:14:11.711     "max_namespaces": 32,
00:14:11.711     "min_cntlid": 1,
00:14:11.711     "max_cntlid": 65519,
00:14:11.711     "namespaces": [
00:14:11.711       {
00:14:11.711         "nsid": 1,
00:14:11.711         "bdev_name": "Malloc2",
00:14:11.711         "name": "Malloc2",
00:14:11.711         "nguid": "B53C0C88375D442A89707BE52B0BA7FB",
00:14:11.711         "uuid": "b53c0c88-375d-442a-8970-7be52b0ba7fb"
00:14:11.711       }
00:14:11.711     ]
00:14:11.711   }
00:14:11.711 ]
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3455292
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:14:11.711 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:14:11.970 [2024-11-20 10:31:12.560368] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:11.970 Malloc4
00:14:11.970 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:14:12.229 [2024-11-20 10:31:12.795080] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:12.229 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:12.229 Asynchronous Event Request test
00:14:12.229 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:14:12.229 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:14:12.229 Registering asynchronous event callbacks...
00:14:12.229 Starting namespace attribute notice tests for all controllers...
00:14:12.229 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:14:12.229 aer_cb - Changed Namespace
00:14:12.229 Cleaning up...
00:14:12.487 [
00:14:12.487   {
00:14:12.487     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:12.487     "subtype": "Discovery",
00:14:12.487     "listen_addresses": [],
00:14:12.487     "allow_any_host": true,
00:14:12.487     "hosts": []
00:14:12.487   },
00:14:12.487   {
00:14:12.487     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:12.487     "subtype": "NVMe",
00:14:12.487     "listen_addresses": [
00:14:12.487       {
00:14:12.487         "trtype": "VFIOUSER",
00:14:12.487         "adrfam": "IPv4",
00:14:12.487         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:12.487         "trsvcid": "0"
00:14:12.487       }
00:14:12.487     ],
00:14:12.487     "allow_any_host": true,
00:14:12.487     "hosts": [],
00:14:12.487     "serial_number": "SPDK1",
00:14:12.487     "model_number": "SPDK bdev Controller",
00:14:12.487     "max_namespaces": 32,
00:14:12.487     "min_cntlid": 1,
00:14:12.487     "max_cntlid": 65519,
00:14:12.487     "namespaces": [
00:14:12.487       {
00:14:12.487         "nsid": 1,
00:14:12.487         "bdev_name": "Malloc1",
00:14:12.487         "name": "Malloc1",
00:14:12.487         "nguid": "F8EDE390B62940A6A74F262C3DB61645",
00:14:12.487         "uuid": "f8ede390-b629-40a6-a74f-262c3db61645"
00:14:12.487       },
00:14:12.487       {
00:14:12.487         "nsid": 2,
00:14:12.487         "bdev_name": "Malloc3",
00:14:12.487         "name": "Malloc3",
00:14:12.487         "nguid": "B5914A1ADACD4CEAA7248A2CA8697403",
00:14:12.487         "uuid": "b5914a1a-dacd-4cea-a724-8a2ca8697403"
00:14:12.487       }
00:14:12.487     ]
00:14:12.487   },
00:14:12.487   {
00:14:12.487     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:12.487     "subtype": "NVMe",
00:14:12.487     "listen_addresses": [
00:14:12.487       {
00:14:12.487         "trtype": "VFIOUSER",
00:14:12.487         "adrfam": "IPv4",
00:14:12.487         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:12.487         "trsvcid": "0"
00:14:12.487       }
00:14:12.487     ],
00:14:12.487     "allow_any_host": true,
00:14:12.487     "hosts": [],
00:14:12.487     "serial_number": "SPDK2",
00:14:12.487     "model_number": "SPDK bdev Controller",
00:14:12.487     "max_namespaces": 32,
00:14:12.487     "min_cntlid": 1,
00:14:12.487     "max_cntlid": 65519,
00:14:12.487     "namespaces": [
00:14:12.487       {
00:14:12.487         "nsid": 1,
00:14:12.487         "bdev_name": "Malloc2",
00:14:12.487         "name": "Malloc2",
00:14:12.487         "nguid": "B53C0C88375D442A89707BE52B0BA7FB",
00:14:12.487         "uuid": "b53c0c88-375d-442a-8970-7be52b0ba7fb"
00:14:12.487       },
00:14:12.487       {
00:14:12.487         "nsid": 2,
00:14:12.487         "bdev_name": "Malloc4",
00:14:12.487         "name": "Malloc4",
00:14:12.487         "nguid": "319FDCA1402949E3A8109424B05DA744",
00:14:12.487         "uuid": "319fdca1-4029-49e3-a810-9424b05da744"
00:14:12.487       }
00:14:12.487     ]
00:14:12.487   }
00:14:12.487 ]
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3455292
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3447458
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3447458 ']'
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3447458
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447458
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447458'
00:14:12.487 killing process with pid 3447458
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3447458
00:14:12.487 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3447458
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3455364
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3455364'
00:14:12.746 Process pid: 3455364
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3455364
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3455364 ']'
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:12.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:12.746 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:14:12.746 [2024-11-20 10:31:13.337908] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:14:12.746 [2024-11-20 10:31:13.338825] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization...
00:14:12.746 [2024-11-20 10:31:13.338866] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:12.746 [2024-11-20 10:31:13.413503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:12.746 [2024-11-20 10:31:13.453352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:12.746 [2024-11-20 10:31:13.453391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:12.746 [2024-11-20 10:31:13.453399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:12.746 [2024-11-20 10:31:13.453405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:12.746 [2024-11-20 10:31:13.453411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:12.746 [2024-11-20 10:31:13.455018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:12.746 [2024-11-20 10:31:13.455124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:12.746 [2024-11-20 10:31:13.455217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:12.746 [2024-11-20 10:31:13.455218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:13.006 [2024-11-20 10:31:13.523983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:14:13.006 [2024-11-20 10:31:13.524514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:14:13.006 [2024-11-20 10:31:13.524984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:14:13.006 [2024-11-20 10:31:13.525409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:14:13.006 [2024-11-20 10:31:13.525452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:14:13.006 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:13.006 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:14:13.006 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:14:13.942 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:14:14.201 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:14:14.201 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:14:14.201 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:14:14.201 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:14:14.201 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:14:14.459 Malloc1
00:14:14.459 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:14:14.459 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:14:14.718 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:14:14.976 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:14:14.976 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:14:14.976 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:14:15.235 Malloc2
00:14:15.235 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:14:15.493 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:14:15.494 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3455364
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3455364 ']'
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3455364
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455364
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455364'
00:14:15.752 killing process with pid 3455364
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3455364
00:14:15.752 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3455364
00:14:16.065 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:14:16.066 
00:14:16.066 real	0m51.636s
00:14:16.066 user	3m19.941s
00:14:16.066 sys	0m3.214s
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:14:16.066 ************************************
00:14:16.066 END TEST nvmf_vfio_user
00:14:16.066 ************************************
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:16.066 ************************************
00:14:16.066 START TEST nvmf_vfio_user_nvme_compliance
00:14:16.066 ************************************
00:14:16.066 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:14:16.340 * Looking for test storage...
00:14:16.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-:
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-:
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<'
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:14:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:16.340 --rc genhtml_branch_coverage=1 
00:14:16.340 --rc genhtml_function_coverage=1 
00:14:16.340 --rc genhtml_legend=1 
00:14:16.340 --rc geninfo_all_blocks=1 
00:14:16.340 --rc geninfo_unexecuted_blocks=1 
00:14:16.340 
00:14:16.340 '
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:14:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:16.340 --rc genhtml_branch_coverage=1 
00:14:16.340 --rc genhtml_function_coverage=1 
00:14:16.340 --rc genhtml_legend=1 
00:14:16.340 --rc geninfo_all_blocks=1 
00:14:16.340 --rc geninfo_unexecuted_blocks=1 
00:14:16.340 
00:14:16.340 '
00:14:16.340 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:16.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:16.341 --rc genhtml_branch_coverage=1 
00:14:16.341 --rc genhtml_function_coverage=1 
00:14:16.341 --rc genhtml_legend=1 
00:14:16.341 --rc geninfo_all_blocks=1 
00:14:16.341 --rc geninfo_unexecuted_blocks=1 
00:14:16.341 
00:14:16.341 '
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:16.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:16.341 --rc genhtml_branch_coverage=1 
00:14:16.341 --rc genhtml_function_coverage=1 
00:14:16.341 --rc genhtml_legend=1 
00:14:16.341 --rc geninfo_all_blocks=1 
00:14:16.341 --rc geninfo_unexecuted_blocks=1 
00:14:16.341 
00:14:16.341 '
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:16.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3456078
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3456078'
00:14:16.341 Process pid: 3456078
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3456078
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3456078 ']'
00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.341 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.341 [2024-11-20 10:31:16.967272] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:14:16.341 [2024-11-20 10:31:16.967321] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.341 [2024-11-20 10:31:17.029257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.631 [2024-11-20 10:31:17.074102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.631 [2024-11-20 10:31:17.074145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.631 [2024-11-20 10:31:17.074153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.631 [2024-11-20 10:31:17.074159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.631 [2024-11-20 10:31:17.074163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:16.631 [2024-11-20 10:31:17.076968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.631 [2024-11-20 10:31:17.076998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.631 [2024-11-20 10:31:17.076998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.631 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.631 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:16.631 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.575 10:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.575 malloc0 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:17.575 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:17.834 00:14:17.834 00:14:17.834 CUnit - A unit testing framework for C - Version 2.1-3 00:14:17.834 http://cunit.sourceforge.net/ 00:14:17.834 00:14:17.834 00:14:17.834 Suite: nvme_compliance 00:14:17.834 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 10:31:18.416543] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.834 [2024-11-20 10:31:18.417881] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:17.834 [2024-11-20 10:31:18.417897] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:17.834 [2024-11-20 10:31:18.417903] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:17.834 [2024-11-20 10:31:18.419567] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.834 passed 00:14:17.834 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 10:31:18.498145] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.834 [2024-11-20 10:31:18.501168] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.834 passed 00:14:18.093 Test: admin_identify_ns ...[2024-11-20 10:31:18.581516] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.093 [2024-11-20 10:31:18.640962] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:18.093 [2024-11-20 10:31:18.648956] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:18.093 [2024-11-20 10:31:18.670057] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:18.093 passed 00:14:18.093 Test: admin_get_features_mandatory_features ...[2024-11-20 10:31:18.747217] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.093 [2024-11-20 10:31:18.750240] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.093 passed 00:14:18.352 Test: admin_get_features_optional_features ...[2024-11-20 10:31:18.831790] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.352 [2024-11-20 10:31:18.834819] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.352 passed 00:14:18.352 Test: admin_set_features_number_of_queues ...[2024-11-20 10:31:18.912414] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.352 [2024-11-20 10:31:19.021046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.352 passed 00:14:18.610 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 10:31:19.097200] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.610 [2024-11-20 10:31:19.101221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.610 passed 00:14:18.610 Test: admin_get_log_page_with_lpo ...[2024-11-20 10:31:19.181456] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.610 [2024-11-20 10:31:19.249957] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:18.610 [2024-11-20 10:31:19.263032] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.610 passed 00:14:18.869 Test: fabric_property_get ...[2024-11-20 10:31:19.342142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.869 [2024-11-20 10:31:19.343388] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:18.869 [2024-11-20 10:31:19.345168] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.869 passed 00:14:18.869 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 10:31:19.424650] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.869 [2024-11-20 10:31:19.425891] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:18.869 [2024-11-20 10:31:19.427671] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.869 passed 00:14:18.869 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 10:31:19.504466] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.869 [2024-11-20 10:31:19.591956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.127 [2024-11-20 10:31:19.614957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.127 [2024-11-20 10:31:19.620041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.127 passed 00:14:19.127 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 10:31:19.696281] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.127 [2024-11-20 10:31:19.697517] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:19.127 [2024-11-20 10:31:19.701307] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.127 passed 00:14:19.127 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 10:31:19.779424] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.127 [2024-11-20 10:31:19.855960] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.386 [2024-11-20 
10:31:19.879955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.386 [2024-11-20 10:31:19.885034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.386 passed 00:14:19.386 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 10:31:19.961077] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.386 [2024-11-20 10:31:19.962321] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:19.386 [2024-11-20 10:31:19.962343] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:19.386 [2024-11-20 10:31:19.964102] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.386 passed 00:14:19.386 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 10:31:20.043465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.644 [2024-11-20 10:31:20.136955] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:19.644 [2024-11-20 10:31:20.144959] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:19.644 [2024-11-20 10:31:20.152952] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:19.644 [2024-11-20 10:31:20.160955] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:19.644 [2024-11-20 10:31:20.190090] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.644 passed 00:14:19.644 Test: admin_create_io_sq_verify_pc ...[2024-11-20 10:31:20.266465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.644 [2024-11-20 10:31:20.282965] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:19.644 [2024-11-20 10:31:20.300654] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.644 passed 00:14:19.902 Test: admin_create_io_qp_max_qps ...[2024-11-20 10:31:20.378181] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.836 [2024-11-20 10:31:21.474958] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:21.403 [2024-11-20 10:31:21.859840] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.403 passed 00:14:21.403 Test: admin_create_io_sq_shared_cq ...[2024-11-20 10:31:21.942097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.403 [2024-11-20 10:31:22.074957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:21.403 [2024-11-20 10:31:22.112039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.662 passed 00:14:21.662 00:14:21.662 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.662 suites 1 1 n/a 0 0 00:14:21.662 tests 18 18 18 0 0 00:14:21.662 asserts 360 360 360 0 n/a 00:14:21.662 00:14:21.662 Elapsed time = 1.523 seconds 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3456078 ']' 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456078' 00:14:21.662 killing process with pid 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3456078 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:21.662 00:14:21.662 real 0m5.677s 00:14:21.662 user 0m15.979s 00:14:21.662 sys 0m0.502s 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.662 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.662 ************************************ 00:14:21.662 END TEST nvmf_vfio_user_nvme_compliance 00:14:21.662 ************************************ 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.921 ************************************ 00:14:21.921 START TEST nvmf_vfio_user_fuzz 00:14:21.921 ************************************ 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.921 * Looking for test storage... 00:14:21.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.921 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:21.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.921 --rc genhtml_branch_coverage=1 00:14:21.921 --rc genhtml_function_coverage=1 00:14:21.921 --rc genhtml_legend=1 00:14:21.921 --rc geninfo_all_blocks=1 00:14:21.921 --rc geninfo_unexecuted_blocks=1 00:14:21.921 00:14:21.921 ' 00:14:21.921 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:21.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.922 --rc genhtml_branch_coverage=1 00:14:21.922 --rc genhtml_function_coverage=1 00:14:21.922 --rc genhtml_legend=1 00:14:21.922 --rc geninfo_all_blocks=1 00:14:21.922 --rc geninfo_unexecuted_blocks=1 00:14:21.922 00:14:21.922 ' 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:21.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.922 --rc genhtml_branch_coverage=1 00:14:21.922 --rc genhtml_function_coverage=1 00:14:21.922 --rc genhtml_legend=1 00:14:21.922 --rc geninfo_all_blocks=1 00:14:21.922 --rc geninfo_unexecuted_blocks=1 00:14:21.922 00:14:21.922 ' 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:21.922 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:21.922 --rc genhtml_branch_coverage=1 00:14:21.922 --rc genhtml_function_coverage=1 00:14:21.922 --rc genhtml_legend=1 00:14:21.922 --rc geninfo_all_blocks=1 00:14:21.922 --rc geninfo_unexecuted_blocks=1 00:14:21.922 00:14:21.922 ' 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.922 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.181 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.181 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3457070 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3457070' 00:14:22.182 Process pid: 3457070 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3457070 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3457070 ']' 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.182 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.182 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.440 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.440 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:22.440 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.376 malloc0 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:23.376 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:55.484 Fuzzing completed. Shutting down the fuzz application 00:14:55.484 00:14:55.484 Dumping successful admin opcodes: 00:14:55.484 8, 9, 10, 24, 00:14:55.484 Dumping successful io opcodes: 00:14:55.484 0, 00:14:55.484 NS: 0x20000081ef00 I/O qp, Total commands completed: 994895, total successful commands: 3893, random_seed: 748949504 00:14:55.484 NS: 0x20000081ef00 admin qp, Total commands completed: 244698, total successful commands: 1972, random_seed: 1077298176 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3457070 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3457070 ']' 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3457070 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457070 00:14:55.484 10:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457070' 00:14:55.484 killing process with pid 3457070 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3457070 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3457070 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:55.484 00:14:55.484 real 0m32.208s 00:14:55.484 user 0m29.665s 00:14:55.484 sys 0m31.735s 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.484 ************************************ 00:14:55.484 END TEST nvmf_vfio_user_fuzz 00:14:55.484 ************************************ 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.484 ************************************ 00:14:55.484 START TEST nvmf_auth_target 00:14:55.484 ************************************ 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:55.484 * Looking for test storage... 00:14:55.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.484 10:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:55.484 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.485 10:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:55.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.485 --rc genhtml_branch_coverage=1 00:14:55.485 --rc genhtml_function_coverage=1 00:14:55.485 --rc genhtml_legend=1 00:14:55.485 --rc geninfo_all_blocks=1 00:14:55.485 --rc geninfo_unexecuted_blocks=1 00:14:55.485 00:14:55.485 ' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:55.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.485 --rc genhtml_branch_coverage=1 00:14:55.485 --rc genhtml_function_coverage=1 00:14:55.485 --rc genhtml_legend=1 00:14:55.485 --rc geninfo_all_blocks=1 00:14:55.485 --rc geninfo_unexecuted_blocks=1 00:14:55.485 00:14:55.485 ' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:55.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.485 --rc genhtml_branch_coverage=1 00:14:55.485 --rc genhtml_function_coverage=1 00:14:55.485 --rc genhtml_legend=1 00:14:55.485 --rc geninfo_all_blocks=1 00:14:55.485 --rc geninfo_unexecuted_blocks=1 00:14:55.485 00:14:55.485 ' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:55.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.485 --rc genhtml_branch_coverage=1 00:14:55.485 --rc genhtml_function_coverage=1 00:14:55.485 --rc genhtml_legend=1 00:14:55.485 
--rc geninfo_all_blocks=1 00:14:55.485 --rc geninfo_unexecuted_blocks=1 00:14:55.485 00:14:55.485 ' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.485 
10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:55.485 10:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:55.485 10:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:55.485 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.764 10:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.764 10:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:00.764 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:00.764 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.764 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.764 
10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:00.765 Found net devices under 0000:86:00.0: cvl_0_0 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.765 
10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:00.765 Found net devices under 0000:86:00.1: cvl_0_1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.765 10:32:00 
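The discovery loop above finds the net interfaces belonging to each PCI NIC by globbing sysfs, then strips the path prefix to keep just the interface names. A minimal sketch of that lookup; the optional second argument (a sysfs root override) is an addition here for testability and is not part of nvmf/common.sh:

```shell
# Sketch of the lookup behind pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# in the trace above.
list_net_devs_for_pci() {
    local pci=$1
    local sysfs_root=${2:-/sys/bus/pci/devices}
    local devs=("$sysfs_root/$pci/net/"*)
    # Keep only the interface names, as common.sh does with "${pci_net_devs[@]##*/}"
    printf '%s\n' "${devs[@]##*/}"
}
```

On the machine in this log, `list_net_devs_for_pci 0000:86:00.0` would print `cvl_0_0`, matching the "Found net devices under 0000:86:00.0" line.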
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:15:00.765 00:15:00.765 --- 10.0.0.2 ping statistics --- 00:15:00.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.765 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:15:00.765 00:15:00.765 --- 10.0.0.1 ping statistics --- 00:15:00.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.765 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
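The nvmf_tcp_init sequence above builds a two-sided loopback-free topology on one host: the target NIC is moved into a network namespace with 10.0.0.2/24, the initiator NIC stays in the host stack with 10.0.0.1/24, TCP port 4420 is opened, and reachability is verified with ping in both directions. A dry-run sketch of those steps (interface names and IPs mirror this log and are placeholders elsewhere; the default `echo` runner only prints the commands, pass `sudo` to actually apply them as root):

```shell
# Dry-run sketch of the netns topology set up in the trace above.
setup_tcp_topology() {
    local run=${1:-echo}                       # "echo" = print only; "sudo" = apply
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    $run ip netns add "$ns"
    $run ip link set "$tgt" netns "$ns"        # target side lives in the namespace
    $run ip addr add 10.0.0.1/24 dev "$ini"    # initiator keeps the host stack
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    $run ip link set "$ini" up
    $run ip netns exec "$ns" ip link set "$tgt" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                    # host -> namespaced target
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```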
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.765 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3465446 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3465446 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3465446 ']' 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.766 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3465617 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3eac50892b13eb178b07f13a57707054688384fd450ca6f3 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bR5 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3eac50892b13eb178b07f13a57707054688384fd450ca6f3 0 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3eac50892b13eb178b07f13a57707054688384fd450ca6f3 0 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3eac50892b13eb178b07f13a57707054688384fd450ca6f3 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bR5 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bR5 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.bR5 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8f5094df0d335edccf1de9fe04aaebe70d855ee6a7179162aef1442eddb5d469 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.du2 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8f5094df0d335edccf1de9fe04aaebe70d855ee6a7179162aef1442eddb5d469 3 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8f5094df0d335edccf1de9fe04aaebe70d855ee6a7179162aef1442eddb5d469 3 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8f5094df0d335edccf1de9fe04aaebe70d855ee6a7179162aef1442eddb5d469 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.du2 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.du2 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.du2 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c53f96b96c8f7941e42efe4c34001ab 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4A2 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c53f96b96c8f7941e42efe4c34001ab 1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8c53f96b96c8f7941e42efe4c34001ab 1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c53f96b96c8f7941e42efe4c34001ab 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.766 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4A2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4A2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.4A2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=63d172f08abbcf1f5502f5071d6859caef32a6b6740d7f15 00:15:00.767 10:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OMt 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 63d172f08abbcf1f5502f5071d6859caef32a6b6740d7f15 2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 63d172f08abbcf1f5502f5071d6859caef32a6b6740d7f15 2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=63d172f08abbcf1f5502f5071d6859caef32a6b6740d7f15 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OMt 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OMt 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.OMt 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=befe2476ebd2c10d5b77c5311ebbe11df2fa935ec981f1d0 00:15:00.767 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lKU 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key befe2476ebd2c10d5b77c5311ebbe11df2fa935ec981f1d0 2 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 befe2476ebd2c10d5b77c5311ebbe11df2fa935ec981f1d0 2 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=befe2476ebd2c10d5b77c5311ebbe11df2fa935ec981f1d0 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lKU 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lKU 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.lKU 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ae32061cc3e1bef2074cac793291aaa 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9Yj 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ae32061cc3e1bef2074cac793291aaa 1 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ae32061cc3e1bef2074cac793291aaa 1 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ae32061cc3e1bef2074cac793291aaa 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9Yj 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9Yj 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.9Yj 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.027 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a71e2ea97e82ecf04820db4be678e1a05e311a3c7b5a35729e5ac1f0d99e1b64 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.haz 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a71e2ea97e82ecf04820db4be678e1a05e311a3c7b5a35729e5ac1f0d99e1b64 3 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 a71e2ea97e82ecf04820db4be678e1a05e311a3c7b5a35729e5ac1f0d99e1b64 3 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a71e2ea97e82ecf04820db4be678e1a05e311a3c7b5a35729e5ac1f0d99e1b64 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.haz 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.haz 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.haz 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3465446 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3465446 ']' 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
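Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd`, then wraps the hex key in the NVMe DH-HMAC-CHAP secret framing and writes it to a mode-0600 temp file. A hedged sketch of that generation step; the framing used here, `DHHC-1:<digest>:<base64(key || crc32)>:`, follows the NVMe-oF in-band authentication convention and is an assumption about what the trace's `python -` helper emits, not SPDK's exact `format_key` implementation:

```shell
# Hedged sketch of gen_dhchap_key as seen in the trace: random hex key,
# then DHHC-1 framing (assumed format, see lead-in). Prints the secret.
gen_dhchap_key() {
    local digest=$1 len=$2          # digest code (0=null..3=sha512), key length in hex chars
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$digest" "$key" <<'EOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), bytes.fromhex(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended little-endian (assumed)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}
```

For example, `gen_dhchap_key 0 48` corresponds to the trace's `gen_dhchap_key null 48`: a 24-byte key, digest code 0.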
00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.028 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3465617 /var/tmp/host.sock 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3465617 ']' 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:01.286 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bR5
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bR5
00:15:01.545 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bR5
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.du2 ]]
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.du2
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.du2
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.du2
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4A2
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4A2
00:15:01.803 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4A2
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.OMt ]]
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OMt
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OMt
00:15:02.061 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OMt
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lKU
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lKU
00:15:02.319 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lKU
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.9Yj ]]
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Yj
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Yj
00:15:02.577 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Yj
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.haz
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.haz
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.haz
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:02.836 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:03.095 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:03.096 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:03.355
00:15:03.355 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:03.355 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:03.355 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:03.614 {
00:15:03.614 "cntlid": 1,
00:15:03.614 "qid": 0,
00:15:03.614 "state": "enabled",
00:15:03.614 "thread": "nvmf_tgt_poll_group_000",
00:15:03.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:03.614 "listen_address": {
00:15:03.614 "trtype": "TCP",
00:15:03.614 "adrfam": "IPv4",
00:15:03.614 "traddr": "10.0.0.2",
00:15:03.614 "trsvcid": "4420"
00:15:03.614 },
00:15:03.614 "peer_address": {
00:15:03.614 "trtype": "TCP",
00:15:03.614 "adrfam": "IPv4",
00:15:03.614 "traddr": "10.0.0.1",
00:15:03.614 "trsvcid": "56324"
00:15:03.614 },
00:15:03.614 "auth": {
00:15:03.614 "state": "completed",
00:15:03.614 "digest": "sha256",
00:15:03.614 "dhgroup": "null"
00:15:03.614 }
00:15:03.614 }
00:15:03.614 ]'
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.614 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.873 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=:
00:15:03.873 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=:
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:04.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:04.442 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.701 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.959
00:15:04.959 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:04.959 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:04.959 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.218 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.218 {
00:15:05.218 "cntlid": 3,
00:15:05.218 "qid": 0,
00:15:05.218 "state": "enabled",
00:15:05.218 "thread": "nvmf_tgt_poll_group_000",
00:15:05.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:05.218 "listen_address": {
00:15:05.218 "trtype": "TCP",
00:15:05.218 "adrfam": "IPv4",
00:15:05.218 "traddr": "10.0.0.2",
00:15:05.218 "trsvcid": "4420"
00:15:05.218 },
00:15:05.218 "peer_address": {
00:15:05.219 "trtype": "TCP",
00:15:05.219 "adrfam": "IPv4",
00:15:05.219 "traddr": "10.0.0.1",
00:15:05.219 "trsvcid": "56348"
00:15:05.219 },
00:15:05.219 "auth": {
00:15:05.219 "state": "completed",
00:15:05.219 "digest": "sha256",
00:15:05.219 "dhgroup": "null"
00:15:05.219 }
00:15:05.219 }
00:15:05.219 ]'
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.219 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.477 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==:
00:15:05.477 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==:
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:06.043 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:06.302 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:06.561
00:15:06.561 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:06.561 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:06.561 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:06.819 {
00:15:06.819 "cntlid": 5,
00:15:06.819 "qid": 0,
00:15:06.819 "state": "enabled",
00:15:06.819 "thread": "nvmf_tgt_poll_group_000",
00:15:06.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:06.819 "listen_address": {
00:15:06.819 "trtype": "TCP",
00:15:06.819 "adrfam": "IPv4",
00:15:06.819 "traddr": "10.0.0.2",
00:15:06.819 "trsvcid": "4420"
00:15:06.819 },
00:15:06.819 "peer_address": {
00:15:06.819 "trtype": "TCP",
00:15:06.819 "adrfam": "IPv4",
00:15:06.819 "traddr": "10.0.0.1",
00:15:06.819 "trsvcid": "56364"
00:15:06.819 },
00:15:06.819 "auth": {
00:15:06.819 "state": "completed",
00:15:06.819 "digest": "sha256",
00:15:06.819 "dhgroup": "null"
00:15:06.819 }
00:15:06.819 }
00:15:06.819 ]'
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:06.819 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:07.078 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM:
00:15:07.078 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM:
00:15:07.645 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:07.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:07.646 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:07.904 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:08.163
00:15:08.163 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:08.163 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:08.163 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:08.422 {
00:15:08.422 "cntlid": 7,
00:15:08.422 "qid": 0,
00:15:08.422 "state": "enabled",
00:15:08.422 "thread": "nvmf_tgt_poll_group_000",
00:15:08.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:08.422 "listen_address": {
00:15:08.422 "trtype": "TCP",
00:15:08.422 "adrfam": "IPv4",
00:15:08.422 "traddr": "10.0.0.2",
00:15:08.422 "trsvcid": "4420"
00:15:08.422 },
00:15:08.422 "peer_address": {
00:15:08.422 "trtype": "TCP",
00:15:08.422 "adrfam": "IPv4",
00:15:08.422 "traddr": "10.0.0.1",
00:15:08.422 "trsvcid": "56386"
00:15:08.422 },
00:15:08.422 "auth": {
00:15:08.422 "state": "completed",
00:15:08.422 "digest": "sha256",
00:15:08.422 "dhgroup": "null"
00:15:08.422 }
00:15:08.422 }
00:15:08.422 ]'
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:08.422 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:08.422 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:08.422 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:08.422 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:08.422 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:08.422 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:08.680 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=:
00:15:08.680 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=:
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:09.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:09.248 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:09.506 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:15:09.506 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:09.506 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:09.506 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:09.507 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.765 00:15:09.765 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.765 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.765 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.024 { 00:15:10.024 "cntlid": 9, 00:15:10.024 "qid": 0, 00:15:10.024 "state": "enabled", 00:15:10.024 "thread": "nvmf_tgt_poll_group_000", 00:15:10.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.024 "listen_address": { 00:15:10.024 "trtype": "TCP", 00:15:10.024 "adrfam": "IPv4", 00:15:10.024 "traddr": "10.0.0.2", 00:15:10.024 "trsvcid": "4420" 00:15:10.024 }, 00:15:10.024 "peer_address": { 00:15:10.024 "trtype": "TCP", 00:15:10.024 "adrfam": "IPv4", 00:15:10.024 "traddr": "10.0.0.1", 00:15:10.024 "trsvcid": "56422" 00:15:10.024 
}, 00:15:10.024 "auth": { 00:15:10.024 "state": "completed", 00:15:10.024 "digest": "sha256", 00:15:10.024 "dhgroup": "ffdhe2048" 00:15:10.024 } 00:15:10.024 } 00:15:10.024 ]' 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.024 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.282 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:10.282 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret 
DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.849 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.108 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.367 00:15:11.367 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.367 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.367 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.626 { 00:15:11.626 "cntlid": 11, 00:15:11.626 "qid": 0, 00:15:11.626 "state": "enabled", 00:15:11.626 "thread": "nvmf_tgt_poll_group_000", 00:15:11.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.626 "listen_address": { 00:15:11.626 "trtype": "TCP", 00:15:11.626 "adrfam": "IPv4", 00:15:11.626 "traddr": "10.0.0.2", 00:15:11.626 "trsvcid": "4420" 00:15:11.626 }, 00:15:11.626 "peer_address": { 00:15:11.626 "trtype": "TCP", 00:15:11.626 "adrfam": "IPv4", 00:15:11.626 "traddr": "10.0.0.1", 00:15:11.626 "trsvcid": "56444" 00:15:11.626 }, 00:15:11.626 "auth": { 00:15:11.626 "state": "completed", 00:15:11.626 "digest": "sha256", 00:15:11.626 "dhgroup": "ffdhe2048" 00:15:11.626 } 00:15:11.626 } 00:15:11.626 ]' 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.626 10:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.626 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.885 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:11.885 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.453 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.712 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.971 00:15:12.971 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.971 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.971 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.230 10:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.230 { 00:15:13.230 "cntlid": 13, 00:15:13.230 "qid": 0, 00:15:13.230 "state": "enabled", 00:15:13.230 "thread": "nvmf_tgt_poll_group_000", 00:15:13.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:13.230 "listen_address": { 00:15:13.230 "trtype": "TCP", 00:15:13.230 "adrfam": "IPv4", 00:15:13.230 "traddr": "10.0.0.2", 00:15:13.230 "trsvcid": "4420" 00:15:13.230 }, 00:15:13.230 "peer_address": { 00:15:13.230 "trtype": "TCP", 00:15:13.230 "adrfam": "IPv4", 00:15:13.230 "traddr": "10.0.0.1", 00:15:13.230 "trsvcid": "56482" 00:15:13.230 }, 00:15:13.230 "auth": { 00:15:13.230 "state": "completed", 00:15:13.230 "digest": "sha256", 00:15:13.230 "dhgroup": "ffdhe2048" 00:15:13.230 } 00:15:13.230 } 00:15:13.230 ]' 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.230 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.489 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:13.489 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.057 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.317 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.576 00:15:14.576 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.576 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.576 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.834 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.835 { 00:15:14.835 "cntlid": 15, 00:15:14.835 "qid": 0, 00:15:14.835 "state": "enabled", 00:15:14.835 "thread": "nvmf_tgt_poll_group_000", 00:15:14.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.835 "listen_address": { 00:15:14.835 "trtype": "TCP", 00:15:14.835 "adrfam": "IPv4", 00:15:14.835 "traddr": "10.0.0.2", 00:15:14.835 "trsvcid": "4420" 00:15:14.835 }, 00:15:14.835 "peer_address": { 00:15:14.835 "trtype": "TCP", 00:15:14.835 "adrfam": "IPv4", 00:15:14.835 "traddr": "10.0.0.1", 
00:15:14.835 "trsvcid": "45668" 00:15:14.835 }, 00:15:14.835 "auth": { 00:15:14.835 "state": "completed", 00:15:14.835 "digest": "sha256", 00:15:14.835 "dhgroup": "ffdhe2048" 00:15:14.835 } 00:15:14.835 } 00:15:14.835 ]' 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.835 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.093 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:15.093 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.661 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.921 10:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.921 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.180 00:15:16.180 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.180 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.180 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.439 { 00:15:16.439 "cntlid": 17, 00:15:16.439 "qid": 0, 00:15:16.439 "state": "enabled", 00:15:16.439 "thread": "nvmf_tgt_poll_group_000", 00:15:16.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:16.439 "listen_address": { 00:15:16.439 "trtype": "TCP", 00:15:16.439 "adrfam": "IPv4", 00:15:16.439 "traddr": "10.0.0.2", 00:15:16.439 "trsvcid": "4420" 00:15:16.439 }, 00:15:16.439 "peer_address": { 00:15:16.439 "trtype": "TCP", 00:15:16.439 "adrfam": "IPv4", 00:15:16.439 "traddr": "10.0.0.1", 00:15:16.439 "trsvcid": "45686" 00:15:16.439 }, 00:15:16.439 "auth": { 00:15:16.439 "state": "completed", 00:15:16.439 "digest": "sha256", 00:15:16.439 "dhgroup": "ffdhe3072" 00:15:16.439 } 00:15:16.439 } 00:15:16.439 ]' 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.439 10:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.439 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.698 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:16.698 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:17.266 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.266 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.267 10:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.267 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.267 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.267 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.267 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.267 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.525 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:17.525 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.525 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.526 10:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.526 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.785 00:15:17.785 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.785 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.785 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.044 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.044 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.044 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.045 { 00:15:18.045 "cntlid": 19, 00:15:18.045 "qid": 0, 00:15:18.045 "state": "enabled", 00:15:18.045 "thread": "nvmf_tgt_poll_group_000", 00:15:18.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.045 "listen_address": { 00:15:18.045 "trtype": "TCP", 00:15:18.045 "adrfam": "IPv4", 00:15:18.045 "traddr": "10.0.0.2", 00:15:18.045 "trsvcid": "4420" 00:15:18.045 }, 00:15:18.045 "peer_address": { 00:15:18.045 "trtype": "TCP", 00:15:18.045 "adrfam": "IPv4", 00:15:18.045 "traddr": "10.0.0.1", 00:15:18.045 "trsvcid": "45708" 00:15:18.045 }, 00:15:18.045 "auth": { 00:15:18.045 "state": "completed", 00:15:18.045 "digest": "sha256", 00:15:18.045 "dhgroup": "ffdhe3072" 00:15:18.045 } 00:15:18.045 } 00:15:18.045 ]' 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.045 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.304 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:18.304 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.871 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.871 10:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.130 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.131 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.389 00:15:19.389 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.389 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.389 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.648 { 00:15:19.648 "cntlid": 21, 00:15:19.648 "qid": 0, 00:15:19.648 "state": "enabled", 00:15:19.648 "thread": "nvmf_tgt_poll_group_000", 00:15:19.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.648 "listen_address": { 00:15:19.648 "trtype": "TCP", 00:15:19.648 "adrfam": "IPv4", 00:15:19.648 "traddr": "10.0.0.2", 00:15:19.648 
"trsvcid": "4420" 00:15:19.648 }, 00:15:19.648 "peer_address": { 00:15:19.648 "trtype": "TCP", 00:15:19.648 "adrfam": "IPv4", 00:15:19.648 "traddr": "10.0.0.1", 00:15:19.648 "trsvcid": "45720" 00:15:19.648 }, 00:15:19.648 "auth": { 00:15:19.648 "state": "completed", 00:15:19.648 "digest": "sha256", 00:15:19.648 "dhgroup": "ffdhe3072" 00:15:19.648 } 00:15:19.648 } 00:15:19.648 ]' 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.648 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.907 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:19.907 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.474 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.733 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.992 00:15:20.992 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.992 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.992 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.250 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.251 { 00:15:21.251 "cntlid": 23, 00:15:21.251 "qid": 0, 00:15:21.251 "state": "enabled", 00:15:21.251 "thread": "nvmf_tgt_poll_group_000", 00:15:21.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.251 "listen_address": { 00:15:21.251 "trtype": "TCP", 00:15:21.251 "adrfam": "IPv4", 00:15:21.251 "traddr": "10.0.0.2", 00:15:21.251 "trsvcid": "4420" 00:15:21.251 }, 00:15:21.251 "peer_address": { 00:15:21.251 "trtype": "TCP", 00:15:21.251 "adrfam": "IPv4", 00:15:21.251 "traddr": "10.0.0.1", 00:15:21.251 "trsvcid": "45756" 00:15:21.251 }, 00:15:21.251 "auth": { 00:15:21.251 "state": "completed", 00:15:21.251 "digest": "sha256", 00:15:21.251 "dhgroup": "ffdhe3072" 00:15:21.251 } 00:15:21.251 } 00:15:21.251 ]' 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.251 10:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.251 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.509 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:21.509 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.075 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.334 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.591 00:15:22.591 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.591 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.591 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.849 10:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.849 { 00:15:22.849 "cntlid": 25, 00:15:22.849 "qid": 0, 00:15:22.849 "state": "enabled", 00:15:22.849 "thread": "nvmf_tgt_poll_group_000", 00:15:22.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.849 "listen_address": { 00:15:22.849 "trtype": "TCP", 00:15:22.849 "adrfam": "IPv4", 00:15:22.849 "traddr": "10.0.0.2", 00:15:22.849 "trsvcid": "4420" 00:15:22.849 }, 00:15:22.849 "peer_address": { 00:15:22.849 "trtype": "TCP", 00:15:22.849 "adrfam": "IPv4", 00:15:22.849 "traddr": "10.0.0.1", 00:15:22.849 "trsvcid": "45776" 00:15:22.849 }, 00:15:22.849 "auth": { 00:15:22.849 "state": "completed", 00:15:22.849 "digest": "sha256", 00:15:22.849 "dhgroup": "ffdhe4096" 00:15:22.849 } 00:15:22.849 } 00:15:22.849 ]' 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.849 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.107 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.107 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.107 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.107 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:23.107 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.739 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.739 10:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.997 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.256 00:15:24.256 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.256 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.256 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.515 { 00:15:24.515 "cntlid": 27, 00:15:24.515 "qid": 0, 00:15:24.515 "state": "enabled", 00:15:24.515 "thread": "nvmf_tgt_poll_group_000", 00:15:24.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.515 "listen_address": { 00:15:24.515 "trtype": "TCP", 00:15:24.515 "adrfam": "IPv4", 00:15:24.515 "traddr": "10.0.0.2", 00:15:24.515 
"trsvcid": "4420" 00:15:24.515 }, 00:15:24.515 "peer_address": { 00:15:24.515 "trtype": "TCP", 00:15:24.515 "adrfam": "IPv4", 00:15:24.515 "traddr": "10.0.0.1", 00:15:24.515 "trsvcid": "41700" 00:15:24.515 }, 00:15:24.515 "auth": { 00:15:24.515 "state": "completed", 00:15:24.515 "digest": "sha256", 00:15:24.515 "dhgroup": "ffdhe4096" 00:15:24.515 } 00:15:24.515 } 00:15:24.515 ]' 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.515 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.774 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:24.774 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.342 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.601 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.860 00:15:25.860 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.860 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.860 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.118 { 00:15:26.118 "cntlid": 29, 00:15:26.118 "qid": 0, 00:15:26.119 "state": "enabled", 00:15:26.119 "thread": "nvmf_tgt_poll_group_000", 00:15:26.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.119 "listen_address": { 00:15:26.119 "trtype": "TCP", 00:15:26.119 "adrfam": "IPv4", 00:15:26.119 "traddr": "10.0.0.2", 00:15:26.119 "trsvcid": "4420" 00:15:26.119 }, 00:15:26.119 "peer_address": { 00:15:26.119 "trtype": "TCP", 00:15:26.119 "adrfam": "IPv4", 00:15:26.119 "traddr": "10.0.0.1", 00:15:26.119 "trsvcid": "41716" 00:15:26.119 }, 00:15:26.119 "auth": { 00:15:26.119 "state": "completed", 00:15:26.119 "digest": "sha256", 00:15:26.119 "dhgroup": "ffdhe4096" 00:15:26.119 } 00:15:26.119 } 00:15:26.119 ]' 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.119 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.119 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.378 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:26.378 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.949 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.207 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.466 00:15:27.466 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.466 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.466 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.725 { 00:15:27.725 "cntlid": 31, 00:15:27.725 "qid": 0, 00:15:27.725 "state": "enabled", 00:15:27.725 "thread": "nvmf_tgt_poll_group_000", 00:15:27.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.725 "listen_address": { 00:15:27.725 "trtype": "TCP", 00:15:27.725 "adrfam": "IPv4", 00:15:27.725 "traddr": "10.0.0.2", 00:15:27.725 "trsvcid": "4420" 00:15:27.725 }, 00:15:27.725 "peer_address": { 00:15:27.725 "trtype": "TCP", 00:15:27.725 "adrfam": "IPv4", 00:15:27.725 "traddr": "10.0.0.1", 00:15:27.725 "trsvcid": "41744" 00:15:27.725 }, 00:15:27.725 "auth": { 00:15:27.725 "state": "completed", 00:15:27.725 "digest": "sha256", 00:15:27.725 "dhgroup": "ffdhe4096" 00:15:27.725 } 00:15:27.725 } 00:15:27.725 ]' 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.725 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.983 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:27.983 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.550 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.550 10:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.808 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.809 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.067 00:15:29.067 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.067 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.067 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.325 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.325 { 00:15:29.325 "cntlid": 33, 00:15:29.325 "qid": 0, 00:15:29.325 "state": "enabled", 00:15:29.325 "thread": "nvmf_tgt_poll_group_000", 00:15:29.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.325 "listen_address": { 00:15:29.325 "trtype": "TCP", 00:15:29.325 "adrfam": "IPv4", 00:15:29.325 "traddr": "10.0.0.2", 00:15:29.325 
"trsvcid": "4420" 00:15:29.325 }, 00:15:29.325 "peer_address": { 00:15:29.325 "trtype": "TCP", 00:15:29.325 "adrfam": "IPv4", 00:15:29.325 "traddr": "10.0.0.1", 00:15:29.325 "trsvcid": "41778" 00:15:29.325 }, 00:15:29.325 "auth": { 00:15:29.325 "state": "completed", 00:15:29.325 "digest": "sha256", 00:15:29.325 "dhgroup": "ffdhe6144" 00:15:29.325 } 00:15:29.325 } 00:15:29.325 ]' 00:15:29.326 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.326 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.326 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.584 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.584 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.584 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.584 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.584 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.843 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:29.843 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.412 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.412 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.412 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.979 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.979 { 00:15:30.979 "cntlid": 35, 00:15:30.979 "qid": 0, 00:15:30.979 "state": "enabled", 00:15:30.979 "thread": "nvmf_tgt_poll_group_000", 00:15:30.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.979 "listen_address": { 00:15:30.979 "trtype": "TCP", 00:15:30.979 "adrfam": "IPv4", 00:15:30.979 "traddr": "10.0.0.2", 00:15:30.979 "trsvcid": "4420" 00:15:30.979 }, 00:15:30.979 "peer_address": { 00:15:30.979 "trtype": "TCP", 00:15:30.979 "adrfam": "IPv4", 00:15:30.979 "traddr": "10.0.0.1", 00:15:30.979 "trsvcid": "41806" 00:15:30.979 }, 00:15:30.979 "auth": { 00:15:30.979 "state": "completed", 00:15:30.979 "digest": "sha256", 00:15:30.979 "dhgroup": "ffdhe6144" 00:15:30.979 } 00:15:30.979 } 00:15:30.979 ]' 00:15:30.979 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.237 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.237 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.237 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.238 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.238 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.238 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.238 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.496 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:31.496 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.063 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.064 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.064 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.631 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.631 10:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.631 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.890 { 00:15:32.890 "cntlid": 37, 00:15:32.890 "qid": 0, 00:15:32.890 "state": "enabled", 00:15:32.890 "thread": "nvmf_tgt_poll_group_000", 00:15:32.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.890 "listen_address": { 00:15:32.890 "trtype": "TCP", 00:15:32.890 "adrfam": "IPv4", 00:15:32.890 "traddr": "10.0.0.2", 00:15:32.890 "trsvcid": "4420" 00:15:32.890 }, 00:15:32.890 "peer_address": { 00:15:32.890 "trtype": "TCP", 00:15:32.890 "adrfam": "IPv4", 00:15:32.890 "traddr": "10.0.0.1", 00:15:32.890 "trsvcid": "41840" 00:15:32.890 }, 00:15:32.890 "auth": { 00:15:32.890 "state": "completed", 00:15:32.890 "digest": "sha256", 00:15:32.890 "dhgroup": "ffdhe6144" 00:15:32.890 } 00:15:32.890 } 00:15:32.890 ]' 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.890 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.149 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:33.149 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.716 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.975 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.234 00:15:34.234 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.234 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.235 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.494 { 00:15:34.494 "cntlid": 39, 00:15:34.494 "qid": 0, 00:15:34.494 "state": "enabled", 00:15:34.494 "thread": "nvmf_tgt_poll_group_000", 00:15:34.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.494 "listen_address": { 00:15:34.494 "trtype": "TCP", 00:15:34.494 "adrfam": 
"IPv4", 00:15:34.494 "traddr": "10.0.0.2", 00:15:34.494 "trsvcid": "4420" 00:15:34.494 }, 00:15:34.494 "peer_address": { 00:15:34.494 "trtype": "TCP", 00:15:34.494 "adrfam": "IPv4", 00:15:34.494 "traddr": "10.0.0.1", 00:15:34.494 "trsvcid": "35418" 00:15:34.494 }, 00:15:34.494 "auth": { 00:15:34.494 "state": "completed", 00:15:34.494 "digest": "sha256", 00:15:34.494 "dhgroup": "ffdhe6144" 00:15:34.494 } 00:15:34.494 } 00:15:34.494 ]' 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.494 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.753 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:34.753 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.319 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.577 
10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.577 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.143 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.143 10:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.143 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.144 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.144 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.144 { 00:15:36.144 "cntlid": 41, 00:15:36.144 "qid": 0, 00:15:36.144 "state": "enabled", 00:15:36.144 "thread": "nvmf_tgt_poll_group_000", 00:15:36.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.144 "listen_address": { 00:15:36.144 "trtype": "TCP", 00:15:36.144 "adrfam": "IPv4", 00:15:36.144 "traddr": "10.0.0.2", 00:15:36.144 "trsvcid": "4420" 00:15:36.144 }, 00:15:36.144 "peer_address": { 00:15:36.144 "trtype": "TCP", 00:15:36.144 "adrfam": "IPv4", 00:15:36.144 "traddr": "10.0.0.1", 00:15:36.144 "trsvcid": "35430" 00:15:36.144 }, 00:15:36.144 "auth": { 00:15:36.144 "state": "completed", 00:15:36.144 "digest": "sha256", 00:15:36.144 "dhgroup": "ffdhe8192" 00:15:36.144 } 00:15:36.144 } 00:15:36.144 ]' 00:15:36.144 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.402 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.660 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:36.660 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.227 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.486 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.744 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.004 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.004 { 00:15:38.004 "cntlid": 43, 00:15:38.004 "qid": 0, 00:15:38.004 "state": "enabled", 00:15:38.004 "thread": "nvmf_tgt_poll_group_000", 00:15:38.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.004 "listen_address": { 00:15:38.004 "trtype": "TCP", 00:15:38.004 "adrfam": "IPv4", 00:15:38.004 "traddr": "10.0.0.2", 00:15:38.004 "trsvcid": "4420" 00:15:38.004 }, 00:15:38.004 "peer_address": { 00:15:38.004 "trtype": "TCP", 00:15:38.004 "adrfam": "IPv4", 00:15:38.004 "traddr": "10.0.0.1", 00:15:38.004 "trsvcid": "35454" 00:15:38.004 }, 00:15:38.004 "auth": { 00:15:38.004 "state": "completed", 00:15:38.004 "digest": "sha256", 00:15:38.004 "dhgroup": "ffdhe8192" 00:15:38.004 } 00:15:38.004 } 00:15:38.004 ]' 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.004 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.263 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.263 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.263 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.263 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.263 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.522 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:38.522 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.089 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.348 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.606 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.864 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.864 { 00:15:39.864 "cntlid": 45, 00:15:39.864 "qid": 0, 00:15:39.864 "state": "enabled", 00:15:39.864 "thread": "nvmf_tgt_poll_group_000", 00:15:39.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.864 
"listen_address": { 00:15:39.864 "trtype": "TCP", 00:15:39.864 "adrfam": "IPv4", 00:15:39.864 "traddr": "10.0.0.2", 00:15:39.864 "trsvcid": "4420" 00:15:39.864 }, 00:15:39.864 "peer_address": { 00:15:39.864 "trtype": "TCP", 00:15:39.864 "adrfam": "IPv4", 00:15:39.864 "traddr": "10.0.0.1", 00:15:39.864 "trsvcid": "35470" 00:15:39.864 }, 00:15:39.864 "auth": { 00:15:39.864 "state": "completed", 00:15:39.864 "digest": "sha256", 00:15:39.864 "dhgroup": "ffdhe8192" 00:15:39.864 } 00:15:39.864 } 00:15:39.864 ]' 00:15:39.865 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.865 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.865 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.123 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.123 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.123 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.123 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.123 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.381 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:40.381 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.949 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.520 00:15:41.520 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.520 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:41.520 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.779 { 00:15:41.779 "cntlid": 47, 00:15:41.779 "qid": 0, 00:15:41.779 "state": "enabled", 00:15:41.779 "thread": "nvmf_tgt_poll_group_000", 00:15:41.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.779 "listen_address": { 00:15:41.779 "trtype": "TCP", 00:15:41.779 "adrfam": "IPv4", 00:15:41.779 "traddr": "10.0.0.2", 00:15:41.779 "trsvcid": "4420" 00:15:41.779 }, 00:15:41.779 "peer_address": { 00:15:41.779 "trtype": "TCP", 00:15:41.779 "adrfam": "IPv4", 00:15:41.779 "traddr": "10.0.0.1", 00:15:41.779 "trsvcid": "35494" 00:15:41.779 }, 00:15:41.779 "auth": { 00:15:41.779 "state": "completed", 00:15:41.779 "digest": "sha256", 00:15:41.779 "dhgroup": "ffdhe8192" 00:15:41.779 } 00:15:41.779 } 00:15:41.779 ]' 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.779 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.779 10:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.780 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.780 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.780 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.780 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.780 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.038 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:42.038 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.606 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.865 
10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.865 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.124 00:15:43.124 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.124 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.124 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.381 { 00:15:43.381 "cntlid": 49, 00:15:43.381 "qid": 0, 00:15:43.381 "state": "enabled", 00:15:43.381 "thread": "nvmf_tgt_poll_group_000", 00:15:43.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.381 "listen_address": { 00:15:43.381 "trtype": "TCP", 00:15:43.381 "adrfam": "IPv4", 00:15:43.381 "traddr": "10.0.0.2", 00:15:43.381 "trsvcid": "4420" 00:15:43.381 }, 00:15:43.381 "peer_address": { 00:15:43.381 "trtype": "TCP", 00:15:43.381 "adrfam": "IPv4", 00:15:43.381 "traddr": "10.0.0.1", 00:15:43.381 "trsvcid": "35512" 00:15:43.381 }, 00:15:43.381 "auth": { 00:15:43.381 "state": "completed", 00:15:43.381 "digest": "sha384", 00:15:43.381 "dhgroup": "null" 00:15:43.381 } 00:15:43.381 } 00:15:43.381 ]' 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.381 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.382 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.382 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.382 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
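The qpairs dump above, and the `jq` checks that follow it (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`), verify that the negotiated authentication parameters match what the test configured via `bdev_nvme_set_options`. A minimal standalone sketch of the same validation, using the sha384/null qpair reported in this iteration (cntlid 49) with the non-auth fields trimmed for brevity:

```python
import json

# Qpair listing as reported by nvmf_subsystem_get_qpairs in the log above
# (sha384 digest, "null" dhgroup iteration; cntlid 49), reduced to the
# fields the auth checks actually inspect.
qpairs = json.loads("""
[
  {
    "cntlid": 49,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "null"
    }
  }
]
""")

# Mirror the jq checks from auth.sh: digest, dhgroup, and auth state must
# match what the host was configured to negotiate.
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth parameters verified")
```

This is the same comparison the script expresses as `[[ sha384 == \s\h\a\3\8\4 ]]` after piping the RPC output through `jq -r`.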
00:15:43.382 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.639 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:43.639 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.206 10:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.206 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.464 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.722 00:15:44.722 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.722 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.722 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.980 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.980 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.980 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.980 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.981 { 00:15:44.981 "cntlid": 51, 00:15:44.981 "qid": 0, 00:15:44.981 "state": "enabled", 00:15:44.981 "thread": "nvmf_tgt_poll_group_000", 00:15:44.981 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.981 "listen_address": { 00:15:44.981 "trtype": "TCP", 00:15:44.981 "adrfam": "IPv4", 00:15:44.981 "traddr": "10.0.0.2", 00:15:44.981 "trsvcid": "4420" 00:15:44.981 }, 00:15:44.981 "peer_address": { 00:15:44.981 "trtype": "TCP", 00:15:44.981 "adrfam": "IPv4", 00:15:44.981 "traddr": "10.0.0.1", 00:15:44.981 "trsvcid": "47404" 00:15:44.981 }, 00:15:44.981 "auth": { 00:15:44.981 "state": "completed", 00:15:44.981 "digest": "sha384", 00:15:44.981 "dhgroup": "null" 00:15:44.981 } 00:15:44.981 } 00:15:44.981 ]' 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.981 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.239 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:45.239 10:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.805 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.063 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.064 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.330 00:15:46.330 10:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.330 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.330 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.588 { 00:15:46.588 "cntlid": 53, 00:15:46.588 "qid": 0, 00:15:46.588 "state": "enabled", 00:15:46.588 "thread": "nvmf_tgt_poll_group_000", 00:15:46.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.588 "listen_address": { 00:15:46.588 "trtype": "TCP", 00:15:46.588 "adrfam": "IPv4", 00:15:46.588 "traddr": "10.0.0.2", 00:15:46.588 "trsvcid": "4420" 00:15:46.588 }, 00:15:46.588 "peer_address": { 00:15:46.588 "trtype": "TCP", 00:15:46.588 "adrfam": "IPv4", 00:15:46.588 "traddr": "10.0.0.1", 00:15:46.588 "trsvcid": "47446" 00:15:46.588 }, 00:15:46.588 "auth": { 00:15:46.588 "state": "completed", 00:15:46.588 "digest": "sha384", 00:15:46.588 "dhgroup": "null" 00:15:46.588 } 00:15:46.588 } 00:15:46.588 ]' 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.588 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.847 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:46.847 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.414 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.673 
10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.673 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.934 00:15:47.934 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.934 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.934 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.196 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.196 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.196 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.196 10:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.196 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.196 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.196 { 00:15:48.196 "cntlid": 55, 00:15:48.196 "qid": 0, 00:15:48.196 "state": "enabled", 00:15:48.196 "thread": "nvmf_tgt_poll_group_000", 00:15:48.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.196 "listen_address": { 00:15:48.196 "trtype": "TCP", 00:15:48.196 "adrfam": "IPv4", 00:15:48.197 "traddr": "10.0.0.2", 00:15:48.197 "trsvcid": "4420" 00:15:48.197 }, 00:15:48.197 "peer_address": { 00:15:48.197 "trtype": "TCP", 00:15:48.197 "adrfam": "IPv4", 00:15:48.197 "traddr": "10.0.0.1", 00:15:48.197 "trsvcid": "47460" 00:15:48.197 }, 00:15:48.197 "auth": { 00:15:48.197 "state": "completed", 00:15:48.197 "digest": "sha384", 00:15:48.197 "dhgroup": "null" 00:15:48.197 } 00:15:48.197 } 00:15:48.197 ]' 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.197 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.455 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:48.455 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.020 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.020 10:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.280 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.539 00:15:49.539 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.539 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.539 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.798 { 00:15:49.798 "cntlid": 57, 00:15:49.798 "qid": 0, 00:15:49.798 "state": "enabled", 00:15:49.798 "thread": "nvmf_tgt_poll_group_000", 00:15:49.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.798 "listen_address": { 00:15:49.798 "trtype": "TCP", 00:15:49.798 "adrfam": "IPv4", 00:15:49.798 "traddr": "10.0.0.2", 00:15:49.798 
"trsvcid": "4420" 00:15:49.798 }, 00:15:49.798 "peer_address": { 00:15:49.798 "trtype": "TCP", 00:15:49.798 "adrfam": "IPv4", 00:15:49.798 "traddr": "10.0.0.1", 00:15:49.798 "trsvcid": "47490" 00:15:49.798 }, 00:15:49.798 "auth": { 00:15:49.798 "state": "completed", 00:15:49.798 "digest": "sha384", 00:15:49.798 "dhgroup": "ffdhe2048" 00:15:49.798 } 00:15:49.798 } 00:15:49.798 ]' 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.798 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.058 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:50.058 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:50.624 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.624 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.625 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.883 10:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.883 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.142 00:15:51.142 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.142 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.142 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.401 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.401 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.402 { 00:15:51.402 "cntlid": 59, 00:15:51.402 "qid": 0, 00:15:51.402 "state": "enabled", 00:15:51.402 "thread": "nvmf_tgt_poll_group_000", 00:15:51.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.402 "listen_address": { 00:15:51.402 "trtype": "TCP", 00:15:51.402 "adrfam": "IPv4", 00:15:51.402 "traddr": "10.0.0.2", 00:15:51.402 "trsvcid": "4420" 00:15:51.402 }, 00:15:51.402 "peer_address": { 00:15:51.402 "trtype": "TCP", 00:15:51.402 "adrfam": "IPv4", 00:15:51.402 "traddr": "10.0.0.1", 00:15:51.402 "trsvcid": "47508" 00:15:51.402 }, 00:15:51.402 "auth": { 00:15:51.402 "state": "completed", 00:15:51.402 "digest": "sha384", 00:15:51.402 "dhgroup": "ffdhe2048" 00:15:51.402 } 00:15:51.402 } 00:15:51.402 ]' 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.402 10:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.402 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.402 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.402 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.402 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.402 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.402 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.661 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:51.661 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.229 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.488 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.746 00:15:52.746 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.746 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.746 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.004 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.004 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.004 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.004 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.004 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.004 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.004 { 00:15:53.004 "cntlid": 61, 00:15:53.004 "qid": 0, 00:15:53.004 "state": "enabled", 00:15:53.004 "thread": "nvmf_tgt_poll_group_000", 00:15:53.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.004 "listen_address": { 00:15:53.004 "trtype": "TCP", 00:15:53.004 "adrfam": "IPv4", 00:15:53.004 "traddr": "10.0.0.2", 00:15:53.004 "trsvcid": "4420" 00:15:53.004 }, 00:15:53.004 "peer_address": { 00:15:53.004 "trtype": "TCP", 00:15:53.004 "adrfam": "IPv4", 00:15:53.004 "traddr": "10.0.0.1", 00:15:53.004 "trsvcid": "47536" 00:15:53.004 }, 00:15:53.004 "auth": { 00:15:53.004 "state": "completed", 00:15:53.004 "digest": "sha384", 00:15:53.004 "dhgroup": "ffdhe2048" 00:15:53.004 } 00:15:53.004 } 00:15:53.004 ]' 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.005 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.264 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:53.264 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.831 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.090 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.348 00:15:54.348 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.348 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.348 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.607 { 00:15:54.607 "cntlid": 63, 00:15:54.607 "qid": 0, 00:15:54.607 "state": "enabled", 00:15:54.607 "thread": "nvmf_tgt_poll_group_000", 00:15:54.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.607 "listen_address": { 00:15:54.607 "trtype": "TCP", 00:15:54.607 "adrfam": 
"IPv4", 00:15:54.607 "traddr": "10.0.0.2", 00:15:54.607 "trsvcid": "4420" 00:15:54.607 }, 00:15:54.607 "peer_address": { 00:15:54.607 "trtype": "TCP", 00:15:54.607 "adrfam": "IPv4", 00:15:54.607 "traddr": "10.0.0.1", 00:15:54.607 "trsvcid": "57104" 00:15:54.607 }, 00:15:54.607 "auth": { 00:15:54.607 "state": "completed", 00:15:54.607 "digest": "sha384", 00:15:54.607 "dhgroup": "ffdhe2048" 00:15:54.607 } 00:15:54.607 } 00:15:54.607 ]' 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.607 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.866 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:54.866 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.433 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.692 
10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.692 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.950 00:15:55.950 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.950 10:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.951 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.951 { 00:15:55.951 "cntlid": 65, 00:15:55.951 "qid": 0, 00:15:55.951 "state": "enabled", 00:15:55.951 "thread": "nvmf_tgt_poll_group_000", 00:15:55.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.951 "listen_address": { 00:15:55.951 "trtype": "TCP", 00:15:55.951 "adrfam": "IPv4", 00:15:55.951 "traddr": "10.0.0.2", 00:15:55.951 "trsvcid": "4420" 00:15:55.951 }, 00:15:55.951 "peer_address": { 00:15:55.951 "trtype": "TCP", 00:15:55.951 "adrfam": "IPv4", 00:15:55.951 "traddr": "10.0.0.1", 00:15:55.951 "trsvcid": "57142" 00:15:55.951 }, 00:15:55.951 "auth": { 00:15:55.951 "state": "completed", 00:15:55.951 "digest": "sha384", 00:15:55.951 "dhgroup": "ffdhe3072" 00:15:55.951 } 00:15:55.951 } 00:15:55.951 ]' 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.209 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.468 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:56.468 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.035 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.293 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.552 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.552 10:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.552 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.810 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.810 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.810 { 00:15:57.810 "cntlid": 67, 00:15:57.810 "qid": 0, 00:15:57.810 "state": "enabled", 00:15:57.810 "thread": "nvmf_tgt_poll_group_000", 00:15:57.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.811 "listen_address": { 00:15:57.811 "trtype": "TCP", 00:15:57.811 "adrfam": "IPv4", 00:15:57.811 "traddr": "10.0.0.2", 00:15:57.811 "trsvcid": "4420" 00:15:57.811 }, 00:15:57.811 "peer_address": { 00:15:57.811 "trtype": "TCP", 00:15:57.811 "adrfam": "IPv4", 00:15:57.811 "traddr": "10.0.0.1", 00:15:57.811 "trsvcid": "57168" 00:15:57.811 }, 00:15:57.811 "auth": { 00:15:57.811 "state": "completed", 00:15:57.811 "digest": "sha384", 00:15:57.811 "dhgroup": "ffdhe3072" 00:15:57.811 } 00:15:57.811 } 00:15:57.811 ]' 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.811 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.069 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:58.069 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.636 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.893 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.893 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.893 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.894 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.894 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.152 { 00:15:59.152 "cntlid": 69, 00:15:59.152 "qid": 0, 00:15:59.152 "state": "enabled", 00:15:59.152 "thread": "nvmf_tgt_poll_group_000", 00:15:59.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.152 
"listen_address": { 00:15:59.152 "trtype": "TCP", 00:15:59.152 "adrfam": "IPv4", 00:15:59.152 "traddr": "10.0.0.2", 00:15:59.152 "trsvcid": "4420" 00:15:59.152 }, 00:15:59.152 "peer_address": { 00:15:59.152 "trtype": "TCP", 00:15:59.152 "adrfam": "IPv4", 00:15:59.152 "traddr": "10.0.0.1", 00:15:59.152 "trsvcid": "57200" 00:15:59.152 }, 00:15:59.152 "auth": { 00:15:59.152 "state": "completed", 00:15:59.152 "digest": "sha384", 00:15:59.152 "dhgroup": "ffdhe3072" 00:15:59.152 } 00:15:59.152 } 00:15:59.152 ]' 00:15:59.152 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.410 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.671 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:15:59.671 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.238 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.496 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.754 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.754 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.012 { 00:16:01.012 "cntlid": 71, 00:16:01.012 "qid": 0, 00:16:01.012 "state": "enabled", 00:16:01.012 "thread": "nvmf_tgt_poll_group_000", 00:16:01.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.012 "listen_address": { 00:16:01.012 "trtype": "TCP", 00:16:01.012 "adrfam": "IPv4", 00:16:01.012 "traddr": "10.0.0.2", 00:16:01.012 "trsvcid": "4420" 00:16:01.012 }, 00:16:01.012 "peer_address": { 00:16:01.012 "trtype": "TCP", 00:16:01.012 "adrfam": "IPv4", 00:16:01.012 "traddr": "10.0.0.1", 00:16:01.012 "trsvcid": "57224" 00:16:01.012 }, 00:16:01.012 "auth": { 00:16:01.012 "state": "completed", 00:16:01.012 "digest": "sha384", 00:16:01.012 "dhgroup": "ffdhe3072" 00:16:01.012 } 00:16:01.012 } 00:16:01.012 ]' 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.012 10:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.012 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.337 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:01.337 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:01.680 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.958 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.959 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.959 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.216 00:16:02.216 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.216 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.216 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.475 10:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.475 { 00:16:02.475 "cntlid": 73, 00:16:02.475 "qid": 0, 00:16:02.475 "state": "enabled", 00:16:02.475 "thread": "nvmf_tgt_poll_group_000", 00:16:02.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.475 "listen_address": { 00:16:02.475 "trtype": "TCP", 00:16:02.475 "adrfam": "IPv4", 00:16:02.475 "traddr": "10.0.0.2", 00:16:02.475 "trsvcid": "4420" 00:16:02.475 }, 00:16:02.475 "peer_address": { 00:16:02.475 "trtype": "TCP", 00:16:02.475 "adrfam": "IPv4", 00:16:02.475 "traddr": "10.0.0.1", 00:16:02.475 "trsvcid": "57258" 00:16:02.475 }, 00:16:02.475 "auth": { 00:16:02.475 "state": "completed", 00:16:02.475 "digest": "sha384", 00:16:02.475 "dhgroup": "ffdhe4096" 00:16:02.475 } 00:16:02.475 } 00:16:02.475 ]' 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.475 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.734 10:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:02.734 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:03.301 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.301 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.301 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.301 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.560 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.560 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.560 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.561 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.819 00:16:03.819 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.819 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.819 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.077 { 00:16:04.077 "cntlid": 75, 00:16:04.077 "qid": 0, 00:16:04.077 "state": "enabled", 00:16:04.077 "thread": "nvmf_tgt_poll_group_000", 00:16:04.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.077 
"listen_address": { 00:16:04.077 "trtype": "TCP", 00:16:04.077 "adrfam": "IPv4", 00:16:04.077 "traddr": "10.0.0.2", 00:16:04.077 "trsvcid": "4420" 00:16:04.077 }, 00:16:04.077 "peer_address": { 00:16:04.077 "trtype": "TCP", 00:16:04.077 "adrfam": "IPv4", 00:16:04.077 "traddr": "10.0.0.1", 00:16:04.077 "trsvcid": "39796" 00:16:04.077 }, 00:16:04.077 "auth": { 00:16:04.077 "state": "completed", 00:16:04.077 "digest": "sha384", 00:16:04.077 "dhgroup": "ffdhe4096" 00:16:04.077 } 00:16:04.077 } 00:16:04.077 ]' 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.077 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.078 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.078 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.336 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.336 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.336 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.595 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:04.595 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.163 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.422 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.710 { 00:16:05.710 "cntlid": 77, 00:16:05.710 "qid": 0, 00:16:05.710 "state": "enabled", 00:16:05.710 "thread": "nvmf_tgt_poll_group_000", 00:16:05.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.710 "listen_address": { 00:16:05.710 "trtype": "TCP", 00:16:05.710 "adrfam": "IPv4", 00:16:05.710 "traddr": "10.0.0.2", 00:16:05.710 "trsvcid": "4420" 00:16:05.710 }, 00:16:05.710 "peer_address": { 00:16:05.710 "trtype": "TCP", 00:16:05.710 "adrfam": "IPv4", 00:16:05.710 "traddr": "10.0.0.1", 00:16:05.710 "trsvcid": "39842" 00:16:05.710 }, 00:16:05.710 "auth": { 00:16:05.710 "state": "completed", 00:16:05.710 "digest": "sha384", 00:16:05.710 "dhgroup": "ffdhe4096" 00:16:05.710 } 00:16:05.710 } 00:16:05.710 ]' 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.710 10:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.710 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.969 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.969 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.969 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.969 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.969 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.228 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:06.228 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.795 10:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.795 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.053 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.312 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.312 10:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.312 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.312 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.312 { 00:16:07.312 "cntlid": 79, 00:16:07.312 "qid": 0, 00:16:07.312 "state": "enabled", 00:16:07.312 "thread": "nvmf_tgt_poll_group_000", 00:16:07.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.312 "listen_address": { 00:16:07.312 "trtype": "TCP", 00:16:07.312 "adrfam": "IPv4", 00:16:07.312 "traddr": "10.0.0.2", 00:16:07.312 "trsvcid": "4420" 00:16:07.312 }, 00:16:07.312 "peer_address": { 00:16:07.312 "trtype": "TCP", 00:16:07.312 "adrfam": "IPv4", 00:16:07.312 "traddr": "10.0.0.1", 00:16:07.312 "trsvcid": "39860" 00:16:07.312 }, 00:16:07.312 "auth": { 00:16:07.312 "state": "completed", 00:16:07.312 "digest": "sha384", 00:16:07.312 "dhgroup": "ffdhe4096" 00:16:07.312 } 00:16:07.312 } 00:16:07.312 ]' 00:16:07.312 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.570 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.570 10:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.828 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:07.828 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:08.396 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.396 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.656 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.656 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.656 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.656 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.916 00:16:08.916 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.916 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.916 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.174 { 00:16:09.174 "cntlid": 81, 00:16:09.174 "qid": 0, 00:16:09.174 "state": "enabled", 00:16:09.174 "thread": "nvmf_tgt_poll_group_000", 00:16:09.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.174 "listen_address": { 
00:16:09.174 "trtype": "TCP", 00:16:09.174 "adrfam": "IPv4", 00:16:09.174 "traddr": "10.0.0.2", 00:16:09.174 "trsvcid": "4420" 00:16:09.174 }, 00:16:09.174 "peer_address": { 00:16:09.174 "trtype": "TCP", 00:16:09.174 "adrfam": "IPv4", 00:16:09.174 "traddr": "10.0.0.1", 00:16:09.174 "trsvcid": "39892" 00:16:09.174 }, 00:16:09.174 "auth": { 00:16:09.174 "state": "completed", 00:16:09.174 "digest": "sha384", 00:16:09.174 "dhgroup": "ffdhe6144" 00:16:09.174 } 00:16:09.174 } 00:16:09.174 ]' 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.174 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.175 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.175 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.175 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.175 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.433 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:09.434 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.000 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.259 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.518 00:16:10.518 10:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.518 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.518 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.778 { 00:16:10.778 "cntlid": 83, 00:16:10.778 "qid": 0, 00:16:10.778 "state": "enabled", 00:16:10.778 "thread": "nvmf_tgt_poll_group_000", 00:16:10.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.778 "listen_address": { 00:16:10.778 "trtype": "TCP", 00:16:10.778 "adrfam": "IPv4", 00:16:10.778 "traddr": "10.0.0.2", 00:16:10.778 "trsvcid": "4420" 00:16:10.778 }, 00:16:10.778 "peer_address": { 00:16:10.778 "trtype": "TCP", 00:16:10.778 "adrfam": "IPv4", 00:16:10.778 "traddr": "10.0.0.1", 00:16:10.778 "trsvcid": "39908" 00:16:10.778 }, 00:16:10.778 "auth": { 00:16:10.778 "state": "completed", 00:16:10.778 "digest": "sha384", 00:16:10.778 "dhgroup": "ffdhe6144" 00:16:10.778 } 00:16:10.778 } 00:16:10.778 ]' 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.778 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.036 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.036 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.036 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.036 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:11.036 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.602 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.602 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.861 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.119 00:16:12.378 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.378 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.378 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.378 { 00:16:12.378 "cntlid": 85, 00:16:12.378 "qid": 0, 00:16:12.378 "state": "enabled", 00:16:12.378 "thread": "nvmf_tgt_poll_group_000", 00:16:12.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.378 "listen_address": { 00:16:12.378 "trtype": "TCP", 00:16:12.378 "adrfam": "IPv4", 00:16:12.378 "traddr": "10.0.0.2", 00:16:12.378 "trsvcid": "4420" 00:16:12.378 }, 00:16:12.378 "peer_address": { 00:16:12.378 "trtype": "TCP", 00:16:12.378 "adrfam": "IPv4", 00:16:12.378 "traddr": "10.0.0.1", 00:16:12.378 "trsvcid": "39952" 00:16:12.378 }, 00:16:12.378 "auth": { 00:16:12.378 "state": "completed", 00:16:12.378 "digest": "sha384", 00:16:12.378 "dhgroup": "ffdhe6144" 00:16:12.378 } 00:16:12.378 } 00:16:12.378 ]' 00:16:12.378 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.637 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.896 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:12.896 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.463 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.463 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.464 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.030 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.030 { 00:16:14.030 "cntlid": 87, 00:16:14.030 "qid": 0, 00:16:14.030 "state": "enabled", 00:16:14.030 "thread": "nvmf_tgt_poll_group_000", 00:16:14.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.030 "listen_address": { 00:16:14.030 "trtype": 
"TCP", 00:16:14.030 "adrfam": "IPv4", 00:16:14.030 "traddr": "10.0.0.2", 00:16:14.030 "trsvcid": "4420" 00:16:14.030 }, 00:16:14.030 "peer_address": { 00:16:14.030 "trtype": "TCP", 00:16:14.030 "adrfam": "IPv4", 00:16:14.030 "traddr": "10.0.0.1", 00:16:14.030 "trsvcid": "46504" 00:16:14.030 }, 00:16:14.030 "auth": { 00:16:14.030 "state": "completed", 00:16:14.030 "digest": "sha384", 00:16:14.030 "dhgroup": "ffdhe6144" 00:16:14.030 } 00:16:14.030 } 00:16:14.030 ]' 00:16:14.030 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.289 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.547 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:14.547 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.114 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.373 10:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.373 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.374 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.374 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.374 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.632 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.891 { 00:16:15.891 "cntlid": 89, 00:16:15.891 "qid": 0, 00:16:15.891 "state": "enabled", 00:16:15.891 "thread": "nvmf_tgt_poll_group_000", 00:16:15.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.891 "listen_address": { 00:16:15.891 "trtype": "TCP", 00:16:15.891 "adrfam": "IPv4", 00:16:15.891 "traddr": "10.0.0.2", 00:16:15.891 "trsvcid": "4420" 00:16:15.891 }, 00:16:15.891 "peer_address": { 00:16:15.891 "trtype": "TCP", 00:16:15.891 "adrfam": "IPv4", 00:16:15.891 "traddr": "10.0.0.1", 00:16:15.891 "trsvcid": "46526" 00:16:15.891 }, 00:16:15.891 "auth": { 00:16:15.891 "state": "completed", 00:16:15.891 "digest": "sha384", 00:16:15.891 "dhgroup": "ffdhe8192" 00:16:15.891 } 00:16:15.891 } 00:16:15.891 ]' 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.891 10:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.891 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.149 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.149 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.149 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.149 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.149 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.408 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:16.408 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.975 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.976 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.235 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.235 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.235 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.235 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.493 00:16:17.493 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.493 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.494 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.752 { 00:16:17.752 "cntlid": 91, 00:16:17.752 "qid": 0, 00:16:17.752 "state": "enabled", 00:16:17.752 "thread": "nvmf_tgt_poll_group_000", 00:16:17.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.752 "listen_address": { 00:16:17.752 "trtype": "TCP", 00:16:17.752 "adrfam": "IPv4", 00:16:17.752 "traddr": "10.0.0.2", 00:16:17.752 "trsvcid": "4420" 00:16:17.752 }, 00:16:17.752 "peer_address": { 00:16:17.752 "trtype": "TCP", 00:16:17.752 "adrfam": "IPv4", 00:16:17.752 "traddr": "10.0.0.1", 00:16:17.752 "trsvcid": "46554" 00:16:17.752 }, 00:16:17.752 "auth": { 00:16:17.752 "state": "completed", 00:16:17.752 "digest": "sha384", 00:16:17.752 "dhgroup": "ffdhe8192" 00:16:17.752 } 00:16:17.752 } 00:16:17.752 ]' 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.752 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.011 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.011 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.011 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:18.011 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.011 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.269 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:18.269 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
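The cycle above repeats for each key: the harness attaches a controller with `--dhchap-key`/`--dhchap-ctrlr-key`, then uses `jq` probes (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) against `nvmf_subsystem_get_qpairs` output to confirm authentication completed with the expected parameters. As an editorial sketch (not part of the test suite), the checks can be reproduced in Python; the sample JSON and the DHHC-1 secret below are excerpted from this log, and the 36-byte decoded length (32-byte key plus a 4-byte trailer) is an observation about the type-01 secrets seen here, not a spec citation:

```python
# Sketch of the jq-style auth checks this harness performs, in Python.
# QPAIRS_JSON is a trimmed copy of the nvmf_subsystem_get_qpairs output
# above; "key1" is one of the --dhchap-secret values from this log.
import base64
import json

QPAIRS_JSON = '''
[
  {
    "cntlid": 91,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe8192"
    }
  }
]
'''

def auth_of(qpairs_json: str) -> dict:
    """Equivalent of the log's jq -r '.[0].auth' probes."""
    return json.loads(qpairs_json)[0]["auth"]

def dhchap_key_bytes(secret: str) -> bytes:
    """Split a DHHC-1:<t>:<base64>: secret and decode the payload.
    For the type-01 secrets in this log the payload decodes to 36
    bytes: a 32-byte key plus a 4-byte trailer (types 02/03 in the
    log decode to 52/68 bytes, i.e. 48- and 64-byte keys)."""
    prefix, _t, b64, _empty = secret.split(":")
    assert prefix == "DHHC-1"
    return base64.b64decode(b64)

auth = auth_of(QPAIRS_JSON)
assert auth["digest"] == "sha384"      # [[ sha384 == \s\h\a\3\8\4 ]]
assert auth["dhgroup"] == "ffdhe8192"  # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
assert auth["state"] == "completed"    # [[ completed == \c\o\m\p\l\e\t\e\d ]]

key1 = "DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR:"
assert len(dhchap_key_bytes(key1)) == 36
```

When all three `auth` fields match the digest, DH group, and completion state requested via `bdev_nvme_set_options`, the harness detaches the controller and moves on to the next key index, as the log continues below.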
00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.836 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.402 00:16:19.402 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.402 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.402 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.660 { 00:16:19.660 "cntlid": 93, 00:16:19.660 "qid": 0, 00:16:19.660 "state": "enabled", 00:16:19.660 "thread": "nvmf_tgt_poll_group_000", 00:16:19.660 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.660 "listen_address": { 00:16:19.660 "trtype": "TCP", 00:16:19.660 "adrfam": "IPv4", 00:16:19.660 "traddr": "10.0.0.2", 00:16:19.660 "trsvcid": "4420" 00:16:19.660 }, 00:16:19.660 "peer_address": { 00:16:19.660 "trtype": "TCP", 00:16:19.660 "adrfam": "IPv4", 00:16:19.660 "traddr": "10.0.0.1", 00:16:19.660 "trsvcid": "46588" 00:16:19.660 }, 00:16:19.660 "auth": { 00:16:19.660 "state": "completed", 00:16:19.660 "digest": "sha384", 00:16:19.660 "dhgroup": "ffdhe8192" 00:16:19.660 } 00:16:19.660 } 00:16:19.660 ]' 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.660 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.919 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.919 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.919 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.919 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:19.919 10:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:20.487 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.487 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.487 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.487 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.487 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:20.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.313 00:16:21.313 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:21.313 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.313 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.571 { 00:16:21.571 "cntlid": 95, 00:16:21.571 "qid": 0, 00:16:21.571 "state": "enabled", 00:16:21.571 "thread": "nvmf_tgt_poll_group_000", 00:16:21.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.571 "listen_address": { 00:16:21.571 "trtype": "TCP", 00:16:21.571 "adrfam": "IPv4", 00:16:21.571 "traddr": "10.0.0.2", 00:16:21.571 "trsvcid": "4420" 00:16:21.571 }, 00:16:21.571 "peer_address": { 00:16:21.571 "trtype": "TCP", 00:16:21.571 "adrfam": "IPv4", 00:16:21.571 "traddr": "10.0.0.1", 00:16:21.571 "trsvcid": "46624" 00:16:21.571 }, 00:16:21.571 "auth": { 00:16:21.571 "state": "completed", 00:16:21.571 "digest": "sha384", 00:16:21.571 "dhgroup": "ffdhe8192" 00:16:21.571 } 00:16:21.571 } 00:16:21.571 ]' 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.571 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.571 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.830 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:21.830 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.397 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.654 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.655 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.913 00:16:22.913 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.913 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.913 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.171 10:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.171 { 00:16:23.171 "cntlid": 97, 00:16:23.171 "qid": 0, 00:16:23.171 "state": "enabled", 00:16:23.171 "thread": "nvmf_tgt_poll_group_000", 00:16:23.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.171 "listen_address": { 00:16:23.171 "trtype": "TCP", 00:16:23.171 "adrfam": "IPv4", 00:16:23.171 "traddr": "10.0.0.2", 00:16:23.171 "trsvcid": "4420" 00:16:23.171 }, 00:16:23.171 "peer_address": { 00:16:23.171 "trtype": "TCP", 00:16:23.171 "adrfam": "IPv4", 00:16:23.171 "traddr": "10.0.0.1", 00:16:23.171 "trsvcid": "46644" 00:16:23.171 }, 00:16:23.171 "auth": { 00:16:23.171 "state": "completed", 00:16:23.171 "digest": "sha512", 00:16:23.171 "dhgroup": "null" 00:16:23.171 } 00:16:23.171 } 00:16:23.171 ]' 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.171 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.429 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:23.429 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.995 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.252 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.253 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.253 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.510 00:16:24.510 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.510 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.510 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.768 { 00:16:24.768 "cntlid": 99, 
00:16:24.768 "qid": 0, 00:16:24.768 "state": "enabled", 00:16:24.768 "thread": "nvmf_tgt_poll_group_000", 00:16:24.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.768 "listen_address": { 00:16:24.768 "trtype": "TCP", 00:16:24.768 "adrfam": "IPv4", 00:16:24.768 "traddr": "10.0.0.2", 00:16:24.768 "trsvcid": "4420" 00:16:24.768 }, 00:16:24.768 "peer_address": { 00:16:24.768 "trtype": "TCP", 00:16:24.768 "adrfam": "IPv4", 00:16:24.768 "traddr": "10.0.0.1", 00:16:24.768 "trsvcid": "44158" 00:16:24.768 }, 00:16:24.768 "auth": { 00:16:24.768 "state": "completed", 00:16:24.768 "digest": "sha512", 00:16:24.768 "dhgroup": "null" 00:16:24.768 } 00:16:24.768 } 00:16:24.768 ]' 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.768 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.027 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret 
DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:25.027 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.593 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.852 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.110 00:16:26.110 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.110 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.110 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.369 { 00:16:26.369 "cntlid": 101, 00:16:26.369 "qid": 0, 00:16:26.369 "state": "enabled", 00:16:26.369 "thread": "nvmf_tgt_poll_group_000", 00:16:26.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.369 "listen_address": { 00:16:26.369 "trtype": "TCP", 00:16:26.369 "adrfam": "IPv4", 00:16:26.369 "traddr": "10.0.0.2", 00:16:26.369 "trsvcid": "4420" 00:16:26.369 }, 00:16:26.369 "peer_address": { 00:16:26.369 "trtype": "TCP", 00:16:26.369 "adrfam": "IPv4", 00:16:26.369 "traddr": "10.0.0.1", 00:16:26.369 "trsvcid": "44196" 00:16:26.369 }, 00:16:26.369 "auth": { 00:16:26.369 "state": "completed", 00:16:26.369 "digest": "sha512", 00:16:26.369 "dhgroup": "null" 00:16:26.369 } 00:16:26.369 } 
00:16:26.369 ]' 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.369 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.369 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.369 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.369 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.628 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:26.628 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.211 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.211 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.470 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.470 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.729 00:16:27.729 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.729 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.729 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.988 { 00:16:27.988 "cntlid": 103, 00:16:27.988 "qid": 0, 00:16:27.988 "state": "enabled", 00:16:27.988 "thread": "nvmf_tgt_poll_group_000", 00:16:27.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.988 "listen_address": { 00:16:27.988 "trtype": "TCP", 00:16:27.988 "adrfam": "IPv4", 00:16:27.988 "traddr": "10.0.0.2", 00:16:27.988 "trsvcid": "4420" 00:16:27.988 }, 00:16:27.988 "peer_address": { 00:16:27.988 "trtype": "TCP", 00:16:27.988 "adrfam": "IPv4", 00:16:27.988 "traddr": "10.0.0.1", 00:16:27.988 "trsvcid": "44226" 00:16:27.988 }, 00:16:27.988 "auth": { 00:16:27.988 "state": "completed", 00:16:27.988 "digest": "sha512", 00:16:27.988 "dhgroup": "null" 00:16:27.988 } 00:16:27.988 } 00:16:27.988 ]' 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.988 10:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.988 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.247 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:28.247 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.814 10:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.814 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.074 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.333 00:16:29.333 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.333 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.333 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.333 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.592 { 00:16:29.592 "cntlid": 105, 00:16:29.592 "qid": 0, 00:16:29.592 "state": "enabled", 00:16:29.592 "thread": "nvmf_tgt_poll_group_000", 00:16:29.592 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.592 "listen_address": { 00:16:29.592 "trtype": "TCP", 00:16:29.592 "adrfam": "IPv4", 00:16:29.592 "traddr": "10.0.0.2", 00:16:29.592 "trsvcid": "4420" 00:16:29.592 }, 00:16:29.592 "peer_address": { 00:16:29.592 "trtype": "TCP", 00:16:29.592 "adrfam": "IPv4", 00:16:29.592 "traddr": "10.0.0.1", 00:16:29.592 "trsvcid": "44256" 00:16:29.592 }, 00:16:29.592 "auth": { 00:16:29.592 "state": "completed", 00:16:29.592 "digest": "sha512", 00:16:29.592 "dhgroup": "ffdhe2048" 00:16:29.592 } 00:16:29.592 } 00:16:29.592 ]' 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.592 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.593 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.593 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.593 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.593 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.850 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret 
DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:29.851 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.416 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.675 10:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.675 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.933 00:16:30.933 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.933 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.933 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.191 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.191 { 00:16:31.191 "cntlid": 107, 00:16:31.192 "qid": 0, 00:16:31.192 "state": "enabled", 00:16:31.192 "thread": "nvmf_tgt_poll_group_000", 00:16:31.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.192 "listen_address": { 00:16:31.192 "trtype": "TCP", 00:16:31.192 "adrfam": "IPv4", 00:16:31.192 "traddr": "10.0.0.2", 00:16:31.192 "trsvcid": "4420" 00:16:31.192 }, 00:16:31.192 "peer_address": { 00:16:31.192 "trtype": "TCP", 00:16:31.192 "adrfam": "IPv4", 00:16:31.192 "traddr": "10.0.0.1", 00:16:31.192 "trsvcid": "44282" 00:16:31.192 }, 00:16:31.192 "auth": { 00:16:31.192 "state": 
"completed", 00:16:31.192 "digest": "sha512", 00:16:31.192 "dhgroup": "ffdhe2048" 00:16:31.192 } 00:16:31.192 } 00:16:31.192 ]' 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.192 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.451 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:31.451 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:32.018 10:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.018 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.277 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.536 00:16:32.536 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.536 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.536 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.794 
10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.794 { 00:16:32.794 "cntlid": 109, 00:16:32.794 "qid": 0, 00:16:32.794 "state": "enabled", 00:16:32.794 "thread": "nvmf_tgt_poll_group_000", 00:16:32.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.794 "listen_address": { 00:16:32.794 "trtype": "TCP", 00:16:32.794 "adrfam": "IPv4", 00:16:32.794 "traddr": "10.0.0.2", 00:16:32.794 "trsvcid": "4420" 00:16:32.794 }, 00:16:32.794 "peer_address": { 00:16:32.794 "trtype": "TCP", 00:16:32.794 "adrfam": "IPv4", 00:16:32.794 "traddr": "10.0.0.1", 00:16:32.794 "trsvcid": "44306" 00:16:32.794 }, 00:16:32.794 "auth": { 00:16:32.794 "state": "completed", 00:16:32.794 "digest": "sha512", 00:16:32.794 "dhgroup": "ffdhe2048" 00:16:32.794 } 00:16:32.794 } 00:16:32.794 ]' 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.794 10:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.794 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.052 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:33.052 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.620 
10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.620 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.879 10:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.879 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.138 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.138 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.138 { 00:16:34.138 "cntlid": 111, 
00:16:34.138 "qid": 0, 00:16:34.138 "state": "enabled", 00:16:34.138 "thread": "nvmf_tgt_poll_group_000", 00:16:34.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.138 "listen_address": { 00:16:34.138 "trtype": "TCP", 00:16:34.138 "adrfam": "IPv4", 00:16:34.138 "traddr": "10.0.0.2", 00:16:34.138 "trsvcid": "4420" 00:16:34.138 }, 00:16:34.138 "peer_address": { 00:16:34.138 "trtype": "TCP", 00:16:34.138 "adrfam": "IPv4", 00:16:34.138 "traddr": "10.0.0.1", 00:16:34.138 "trsvcid": "55450" 00:16:34.138 }, 00:16:34.138 "auth": { 00:16:34.138 "state": "completed", 00:16:34.138 "digest": "sha512", 00:16:34.138 "dhgroup": "ffdhe2048" 00:16:34.138 } 00:16:34.138 } 00:16:34.138 ]' 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.396 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.654 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:34.654 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.221 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.480 10:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.480 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.480 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.480 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.480 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.480 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.738 00:16:35.738 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.738 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.738 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.997 { 00:16:35.997 "cntlid": 113, 00:16:35.997 "qid": 0, 00:16:35.997 "state": "enabled", 00:16:35.997 "thread": "nvmf_tgt_poll_group_000", 00:16:35.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.997 "listen_address": { 00:16:35.997 "trtype": "TCP", 00:16:35.997 "adrfam": "IPv4", 00:16:35.997 "traddr": "10.0.0.2", 00:16:35.997 "trsvcid": "4420" 00:16:35.997 }, 00:16:35.997 "peer_address": { 00:16:35.997 "trtype": "TCP", 00:16:35.997 "adrfam": "IPv4", 00:16:35.997 "traddr": "10.0.0.1", 00:16:35.997 "trsvcid": "55486" 00:16:35.997 }, 00:16:35.997 "auth": { 00:16:35.997 "state": 
"completed", 00:16:35.997 "digest": "sha512", 00:16:35.997 "dhgroup": "ffdhe3072" 00:16:35.997 } 00:16:35.997 } 00:16:35.997 ]' 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.997 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.256 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:36.256 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret 
DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.825 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.084 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.343 00:16:37.343 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.343 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.343 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.602 { 00:16:37.602 "cntlid": 115, 00:16:37.602 "qid": 0, 00:16:37.602 "state": "enabled", 00:16:37.602 "thread": "nvmf_tgt_poll_group_000", 00:16:37.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.602 "listen_address": { 00:16:37.602 "trtype": "TCP", 00:16:37.602 "adrfam": "IPv4", 00:16:37.602 "traddr": "10.0.0.2", 00:16:37.602 "trsvcid": "4420" 00:16:37.602 }, 00:16:37.602 "peer_address": { 00:16:37.602 "trtype": "TCP", 00:16:37.602 "adrfam": "IPv4", 00:16:37.602 "traddr": "10.0.0.1", 00:16:37.602 "trsvcid": "55502" 00:16:37.602 }, 00:16:37.602 "auth": { 00:16:37.602 "state": "completed", 00:16:37.602 "digest": "sha512", 00:16:37.602 "dhgroup": "ffdhe3072" 00:16:37.602 } 00:16:37.602 } 00:16:37.602 ]' 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.602 10:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.602 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.861 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:37.861 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:38.429 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.429 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.429 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:38.429 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.429 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.429 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.429 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.429 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.689 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.022 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.022 10:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.022 { 00:16:39.022 "cntlid": 117, 00:16:39.022 "qid": 0, 00:16:39.022 "state": "enabled", 00:16:39.022 "thread": "nvmf_tgt_poll_group_000", 00:16:39.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.022 "listen_address": { 00:16:39.022 "trtype": "TCP", 00:16:39.022 "adrfam": "IPv4", 00:16:39.022 "traddr": "10.0.0.2", 00:16:39.022 "trsvcid": "4420" 00:16:39.022 }, 00:16:39.022 "peer_address": { 00:16:39.022 "trtype": "TCP", 00:16:39.022 "adrfam": "IPv4", 00:16:39.022 "traddr": "10.0.0.1", 00:16:39.022 "trsvcid": "55534" 00:16:39.022 }, 00:16:39.022 "auth": { 00:16:39.022 "state": "completed", 00:16:39.022 "digest": "sha512", 00:16:39.022 "dhgroup": "ffdhe3072" 00:16:39.022 } 00:16:39.022 } 00:16:39.022 ]' 00:16:39.022 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.338 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.338 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:39.338 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.906 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.164 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.423 00:16:40.423 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.423 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.423 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.681 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.681 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.681 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.681 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.682 { 00:16:40.682 "cntlid": 119, 00:16:40.682 "qid": 0, 00:16:40.682 "state": "enabled", 00:16:40.682 "thread": "nvmf_tgt_poll_group_000", 00:16:40.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.682 "listen_address": { 00:16:40.682 "trtype": "TCP", 00:16:40.682 "adrfam": "IPv4", 00:16:40.682 "traddr": "10.0.0.2", 00:16:40.682 "trsvcid": "4420" 00:16:40.682 }, 00:16:40.682 "peer_address": { 00:16:40.682 "trtype": "TCP", 00:16:40.682 "adrfam": "IPv4", 00:16:40.682 "traddr": "10.0.0.1", 
00:16:40.682 "trsvcid": "55572" 00:16:40.682 }, 00:16:40.682 "auth": { 00:16:40.682 "state": "completed", 00:16:40.682 "digest": "sha512", 00:16:40.682 "dhgroup": "ffdhe3072" 00:16:40.682 } 00:16:40.682 } 00:16:40.682 ]' 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.682 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.940 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:40.940 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
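The checks above pull `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` out of the `nvmf_subsystem_get_qpairs` JSON with jq and compare them against the expected values. A minimal self-contained sketch of that verification (assumption: a canned qpairs document stands in for the live RPC reply, and `python3 -c` substitutes for jq so the sketch has no external dependency):

```shell
# Canned stand-in for the JSON that nvmf_subsystem_get_qpairs returns in the log.
qpairs='[{"auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072"}}]'

# Extract the three auth fields (the log does this with `jq -r '.[0].auth.digest'` etc.).
digest=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["digest"])' "$qpairs")
dhgroup=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["dhgroup"])' "$qpairs")
state=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["state"])' "$qpairs")

# Same comparisons the log performs with [[ sha512 == \s\h\a\5\1\2 ]] style tests.
[[ $digest == sha512 && $dhgroup == ffdhe3072 && $state == completed ]] \
    && echo "auth verified: $digest/$dhgroup"
```

A qpair only reports `"state": "completed"` after the DH-HMAC-CHAP exchange succeeds, which is why the script asserts on it after every attach.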
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.507 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.766 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.766 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.024 00:16:42.024 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.024 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.024 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.282 { 00:16:42.282 "cntlid": 121, 00:16:42.282 "qid": 0, 00:16:42.282 "state": "enabled", 00:16:42.282 "thread": "nvmf_tgt_poll_group_000", 00:16:42.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.282 "listen_address": { 00:16:42.282 "trtype": "TCP", 00:16:42.282 "adrfam": "IPv4", 00:16:42.282 "traddr": "10.0.0.2", 00:16:42.282 "trsvcid": "4420" 00:16:42.282 }, 00:16:42.282 "peer_address": { 00:16:42.282 "trtype": "TCP", 00:16:42.282 "adrfam": "IPv4", 00:16:42.282 "traddr": "10.0.0.1", 00:16:42.282 "trsvcid": "55598" 00:16:42.282 }, 00:16:42.282 "auth": { 00:16:42.282 "state": "completed", 00:16:42.282 "digest": "sha512", 00:16:42.282 "dhgroup": "ffdhe4096" 00:16:42.282 } 00:16:42.282 } 00:16:42.282 ]' 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.282 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.282 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.283 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.283 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.283 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.541 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:42.541 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.108 10:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.108 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.366 10:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.366 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.624 00:16:43.624 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.624 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.624 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.882 { 00:16:43.882 "cntlid": 123, 00:16:43.882 "qid": 0, 00:16:43.882 "state": "enabled", 00:16:43.882 "thread": "nvmf_tgt_poll_group_000", 00:16:43.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.882 "listen_address": { 00:16:43.882 "trtype": "TCP", 00:16:43.882 "adrfam": "IPv4", 00:16:43.882 "traddr": "10.0.0.2", 00:16:43.882 "trsvcid": "4420" 00:16:43.882 }, 00:16:43.882 "peer_address": { 00:16:43.882 "trtype": "TCP", 00:16:43.882 "adrfam": "IPv4", 00:16:43.882 "traddr": "10.0.0.1", 00:16:43.882 "trsvcid": "57678" 00:16:43.882 }, 00:16:43.882 "auth": { 00:16:43.882 "state": "completed", 00:16:43.882 "digest": "sha512", 00:16:43.882 "dhgroup": "ffdhe4096" 00:16:43.882 } 00:16:43.882 } 00:16:43.882 ]' 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.882 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.141 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.141 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.141 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.141 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:44.141 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:44.707 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.966 10:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.966 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.224 00:16:45.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.482 { 00:16:45.482 "cntlid": 125, 00:16:45.482 "qid": 0, 00:16:45.482 "state": "enabled", 00:16:45.482 "thread": "nvmf_tgt_poll_group_000", 00:16:45.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.482 "listen_address": { 00:16:45.482 "trtype": "TCP", 00:16:45.482 "adrfam": "IPv4", 00:16:45.482 "traddr": "10.0.0.2", 00:16:45.482 
"trsvcid": "4420" 00:16:45.482 }, 00:16:45.482 "peer_address": { 00:16:45.482 "trtype": "TCP", 00:16:45.482 "adrfam": "IPv4", 00:16:45.482 "traddr": "10.0.0.1", 00:16:45.482 "trsvcid": "57706" 00:16:45.482 }, 00:16:45.482 "auth": { 00:16:45.482 "state": "completed", 00:16:45.482 "digest": "sha512", 00:16:45.482 "dhgroup": "ffdhe4096" 00:16:45.482 } 00:16:45.482 } 00:16:45.482 ]' 00:16:45.482 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.741 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.999 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:45.999 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:46.565 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.566 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.824 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.083 { 00:16:47.083 "cntlid": 127, 00:16:47.083 "qid": 0, 00:16:47.083 "state": "enabled", 00:16:47.083 "thread": "nvmf_tgt_poll_group_000", 00:16:47.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.083 "listen_address": { 00:16:47.083 "trtype": "TCP", 00:16:47.083 "adrfam": "IPv4", 00:16:47.083 "traddr": "10.0.0.2", 00:16:47.083 "trsvcid": "4420" 00:16:47.083 }, 00:16:47.083 "peer_address": { 00:16:47.083 "trtype": "TCP", 00:16:47.083 "adrfam": "IPv4", 00:16:47.083 "traddr": "10.0.0.1", 00:16:47.083 "trsvcid": "57726" 00:16:47.083 }, 00:16:47.083 "auth": { 00:16:47.083 "state": "completed", 00:16:47.083 "digest": "sha512", 00:16:47.083 "dhgroup": "ffdhe4096" 00:16:47.083 } 00:16:47.083 } 00:16:47.083 ]' 00:16:47.083 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.341 10:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.341 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.600 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:47.600 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.166 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.167 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.167 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.167 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.425 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.686 00:16:48.686 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.686 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.686 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.944 10:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.944 { 00:16:48.944 "cntlid": 129, 00:16:48.944 "qid": 0, 00:16:48.944 "state": "enabled", 00:16:48.944 "thread": "nvmf_tgt_poll_group_000", 00:16:48.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.944 "listen_address": { 00:16:48.944 "trtype": "TCP", 00:16:48.944 "adrfam": "IPv4", 00:16:48.944 "traddr": "10.0.0.2", 00:16:48.944 "trsvcid": "4420" 00:16:48.944 }, 00:16:48.944 "peer_address": { 00:16:48.944 "trtype": "TCP", 00:16:48.944 "adrfam": "IPv4", 00:16:48.944 "traddr": "10.0.0.1", 00:16:48.944 "trsvcid": "57744" 00:16:48.944 }, 00:16:48.944 "auth": { 00:16:48.944 "state": "completed", 00:16:48.944 "digest": "sha512", 00:16:48.944 "dhgroup": "ffdhe6144" 00:16:48.944 } 00:16:48.944 } 00:16:48.944 ]' 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.944 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.203 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:49.203 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.770 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.770 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.028 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.029 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.029 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.287 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.545 { 00:16:50.545 "cntlid": 131, 00:16:50.545 "qid": 0, 00:16:50.545 "state": "enabled", 00:16:50.545 "thread": "nvmf_tgt_poll_group_000", 00:16:50.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.545 "listen_address": { 00:16:50.545 "trtype": "TCP", 00:16:50.545 "adrfam": "IPv4", 00:16:50.545 "traddr": "10.0.0.2", 00:16:50.545 
"trsvcid": "4420" 00:16:50.545 }, 00:16:50.545 "peer_address": { 00:16:50.545 "trtype": "TCP", 00:16:50.545 "adrfam": "IPv4", 00:16:50.545 "traddr": "10.0.0.1", 00:16:50.545 "trsvcid": "57768" 00:16:50.545 }, 00:16:50.545 "auth": { 00:16:50.545 "state": "completed", 00:16:50.545 "digest": "sha512", 00:16:50.545 "dhgroup": "ffdhe6144" 00:16:50.545 } 00:16:50.545 } 00:16:50.545 ]' 00:16:50.545 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.829 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.088 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:51.088 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.654 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.220 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.220 { 00:16:52.220 "cntlid": 133, 00:16:52.220 "qid": 0, 00:16:52.220 "state": "enabled", 00:16:52.220 "thread": "nvmf_tgt_poll_group_000", 00:16:52.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.220 "listen_address": { 00:16:52.220 "trtype": "TCP", 00:16:52.220 "adrfam": "IPv4", 00:16:52.220 "traddr": "10.0.0.2", 00:16:52.220 "trsvcid": "4420" 00:16:52.220 }, 00:16:52.220 "peer_address": { 00:16:52.220 "trtype": "TCP", 00:16:52.220 "adrfam": "IPv4", 00:16:52.220 "traddr": "10.0.0.1", 00:16:52.220 "trsvcid": "57786" 00:16:52.220 }, 00:16:52.220 "auth": { 00:16:52.220 "state": "completed", 00:16:52.220 "digest": "sha512", 00:16:52.220 "dhgroup": "ffdhe6144" 00:16:52.220 } 00:16:52.220 } 00:16:52.220 ]' 00:16:52.220 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.509 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.509 10:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.509 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.509 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.509 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.509 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.509 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.768 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:52.768 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.335 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.335 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.593 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.593 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.593 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.851 00:16:53.851 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.851 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.851 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.109 { 00:16:54.109 "cntlid": 135, 00:16:54.109 "qid": 0, 00:16:54.109 "state": "enabled", 00:16:54.109 "thread": "nvmf_tgt_poll_group_000", 00:16:54.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.109 "listen_address": { 00:16:54.109 "trtype": "TCP", 00:16:54.109 "adrfam": "IPv4", 00:16:54.109 "traddr": "10.0.0.2", 00:16:54.109 "trsvcid": "4420" 00:16:54.109 }, 00:16:54.109 "peer_address": { 00:16:54.109 "trtype": "TCP", 00:16:54.109 "adrfam": "IPv4", 00:16:54.109 "traddr": "10.0.0.1", 00:16:54.109 "trsvcid": "52368" 00:16:54.109 }, 00:16:54.109 "auth": { 00:16:54.109 "state": "completed", 00:16:54.109 "digest": "sha512", 00:16:54.109 "dhgroup": "ffdhe6144" 00:16:54.109 } 00:16:54.109 } 00:16:54.109 ]' 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.109 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.367 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:54.367 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.932 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.932 10:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.191 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.757 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.757 { 00:16:55.757 "cntlid": 137, 00:16:55.757 "qid": 0, 00:16:55.757 "state": "enabled", 00:16:55.757 "thread": "nvmf_tgt_poll_group_000", 00:16:55.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.757 "listen_address": { 00:16:55.757 "trtype": "TCP", 00:16:55.757 "adrfam": "IPv4", 00:16:55.757 "traddr": "10.0.0.2", 00:16:55.757 
"trsvcid": "4420" 00:16:55.757 }, 00:16:55.757 "peer_address": { 00:16:55.757 "trtype": "TCP", 00:16:55.757 "adrfam": "IPv4", 00:16:55.757 "traddr": "10.0.0.1", 00:16:55.757 "trsvcid": "52394" 00:16:55.757 }, 00:16:55.757 "auth": { 00:16:55.757 "state": "completed", 00:16:55.757 "digest": "sha512", 00:16:55.757 "dhgroup": "ffdhe8192" 00:16:55.757 } 00:16:55.757 } 00:16:55.757 ]' 00:16:55.757 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.016 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.274 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:56.274 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.841 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.099 10:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.099 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.358 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.616 { 00:16:57.616 "cntlid": 139, 00:16:57.616 "qid": 0, 00:16:57.616 "state": "enabled", 00:16:57.616 "thread": "nvmf_tgt_poll_group_000", 00:16:57.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.616 "listen_address": { 00:16:57.616 "trtype": "TCP", 00:16:57.616 "adrfam": "IPv4", 00:16:57.616 "traddr": "10.0.0.2", 00:16:57.616 "trsvcid": "4420" 00:16:57.616 }, 00:16:57.616 "peer_address": { 00:16:57.616 "trtype": "TCP", 00:16:57.616 "adrfam": "IPv4", 00:16:57.616 "traddr": "10.0.0.1", 00:16:57.616 "trsvcid": "52422" 00:16:57.616 }, 00:16:57.616 "auth": { 00:16:57.616 "state": "completed", 00:16:57.616 "digest": "sha512", 00:16:57.616 "dhgroup": "ffdhe8192" 00:16:57.616 } 00:16:57.616 } 00:16:57.616 ]' 00:16:57.616 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.873 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.873 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.131 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:58.131 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: --dhchap-ctrl-secret DHHC-1:02:NjNkMTcyZjA4YWJiY2YxZjU1MDJmNTA3MWQ2ODU5Y2FlZjMyYTZiNjc0MGQ3ZjE1C+6Org==: 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.698 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
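Each per-key cycle in the log above ends the same way: after `bdev_nvme_attach_controller` succeeds, the test fetches the subsystem's qpairs and runs three `jq` checks on the first qpair's `auth` block (`state == completed`, `digest == sha512`, `dhgroup == ffdhe8192`). A minimal standalone sketch of that verification step, using a sample document in the shape returned by `nvmf_subsystem_get_qpairs` as logged above (field values are illustrative, copied from the key1 iteration):

```python
import json

# Sample qpair listing in the shape SPDK's nvmf_subsystem_get_qpairs RPC
# returns, as seen in the log (trimmed; values are illustrative only).
qpairs_json = """
[
  {
    "cntlid": 139,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192"}
  }
]
"""

def check_auth(qpairs_text, digest, dhgroup):
    """Mirror the test's jq assertions: the first qpair must show a
    completed DH-HMAC-CHAP exchange with the expected hash digest and
    FFDHE group."""
    auth = json.loads(qpairs_text)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

print(check_auth(qpairs_json, "sha512", "ffdhe8192"))  # True
```

The shell test expresses the same three checks as separate `jq -r` extractions compared with `[[ ... == ... ]]`; collapsing them into one predicate is just a convenience for the sketch.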
00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.957 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.215 00:16:59.474 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.474 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.474 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.474 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.474 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.474 10:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.474 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.474 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.474 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.474 { 00:16:59.474 "cntlid": 141, 00:16:59.474 "qid": 0, 00:16:59.474 "state": "enabled", 00:16:59.474 "thread": "nvmf_tgt_poll_group_000", 00:16:59.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.474 "listen_address": { 00:16:59.474 "trtype": "TCP", 00:16:59.474 "adrfam": "IPv4", 00:16:59.474 "traddr": "10.0.0.2", 00:16:59.474 "trsvcid": "4420" 00:16:59.474 }, 00:16:59.474 "peer_address": { 00:16:59.474 "trtype": "TCP", 00:16:59.474 "adrfam": "IPv4", 00:16:59.475 "traddr": "10.0.0.1", 00:16:59.475 "trsvcid": "52440" 00:16:59.475 }, 00:16:59.475 "auth": { 00:16:59.475 "state": "completed", 00:16:59.475 "digest": "sha512", 00:16:59.475 "dhgroup": "ffdhe8192" 00:16:59.475 } 00:16:59.475 } 00:16:59.475 ]' 00:16:59.475 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.475 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.475 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:16:59.733 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:01:OWFlMzIwNjFjYzNlMWJlZjIwNzRjYWM3OTMyOTFhYWFDR7OM: 00:17:00.299 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.557 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.558 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.558 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.558 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.124 00:17:01.124 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.124 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.124 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.382 { 00:17:01.382 "cntlid": 143, 00:17:01.382 "qid": 0, 00:17:01.382 "state": "enabled", 00:17:01.382 "thread": "nvmf_tgt_poll_group_000", 00:17:01.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.382 "listen_address": { 00:17:01.382 "trtype": "TCP", 00:17:01.382 "adrfam": 
"IPv4", 00:17:01.382 "traddr": "10.0.0.2", 00:17:01.382 "trsvcid": "4420" 00:17:01.382 }, 00:17:01.382 "peer_address": { 00:17:01.382 "trtype": "TCP", 00:17:01.382 "adrfam": "IPv4", 00:17:01.382 "traddr": "10.0.0.1", 00:17:01.382 "trsvcid": "52478" 00:17:01.382 }, 00:17:01.382 "auth": { 00:17:01.382 "state": "completed", 00:17:01.382 "digest": "sha512", 00:17:01.382 "dhgroup": "ffdhe8192" 00:17:01.382 } 00:17:01.382 } 00:17:01.382 ]' 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.382 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.382 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.640 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:01.640 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.206 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.464 10:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:02.464 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.464 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.464 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.464 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.465 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.031 00:17:03.031 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.031 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.031 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.289 { 00:17:03.289 "cntlid": 145, 00:17:03.289 "qid": 0, 00:17:03.289 "state": "enabled", 00:17:03.289 "thread": "nvmf_tgt_poll_group_000", 00:17:03.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.289 "listen_address": { 00:17:03.289 "trtype": "TCP", 00:17:03.289 "adrfam": "IPv4", 00:17:03.289 "traddr": "10.0.0.2", 00:17:03.289 "trsvcid": "4420" 00:17:03.289 }, 00:17:03.289 "peer_address": { 00:17:03.289 "trtype": "TCP", 00:17:03.289 "adrfam": "IPv4", 00:17:03.289 "traddr": "10.0.0.1", 00:17:03.289 "trsvcid": "52510" 00:17:03.289 }, 00:17:03.289 "auth": { 00:17:03.289 "state": 
"completed", 00:17:03.289 "digest": "sha512", 00:17:03.289 "dhgroup": "ffdhe8192" 00:17:03.289 } 00:17:03.289 } 00:17:03.289 ]' 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.289 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.548 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:17:03.548 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VhYzUwODkyYjEzZWIxNzhiMDdmMTNhNTc3MDcwNTQ2ODgzODRmZDQ1MGNhNmYzLZGqyQ==: --dhchap-ctrl-secret 
DHHC-1:03:OGY1MDk0ZGYwZDMzNWVkY2NmMWRlOWZlMDRhYWViZTcwZDg1NWVlNmE3MTc5MTYyYWVmMTQ0MmVkZGI1ZDQ2OVcjPAI=: 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:04.115 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:04.682 request: 00:17:04.682 { 00:17:04.682 "name": "nvme0", 00:17:04.682 "trtype": "tcp", 00:17:04.682 "traddr": "10.0.0.2", 00:17:04.682 "adrfam": "ipv4", 00:17:04.682 "trsvcid": "4420", 00:17:04.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.682 "prchk_reftag": false, 00:17:04.682 "prchk_guard": false, 00:17:04.682 "hdgst": false, 00:17:04.682 "ddgst": false, 00:17:04.682 "dhchap_key": "key2", 00:17:04.682 "allow_unrecognized_csi": false, 00:17:04.682 "method": "bdev_nvme_attach_controller", 00:17:04.682 "req_id": 1 00:17:04.682 } 00:17:04.682 Got JSON-RPC error response 00:17:04.682 response: 00:17:04.682 { 00:17:04.682 "code": -5, 00:17:04.682 "message": 
"Input/output error" 00:17:04.682 } 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.682 10:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.682 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.683 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.941 request: 00:17:04.941 { 00:17:04.941 "name": "nvme0", 00:17:04.941 "trtype": "tcp", 00:17:04.941 "traddr": "10.0.0.2", 00:17:04.941 "adrfam": "ipv4", 00:17:04.941 "trsvcid": "4420", 00:17:04.941 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.941 "prchk_reftag": false, 00:17:04.941 "prchk_guard": false, 00:17:04.941 "hdgst": 
false, 00:17:04.941 "ddgst": false, 00:17:04.941 "dhchap_key": "key1", 00:17:04.941 "dhchap_ctrlr_key": "ckey2", 00:17:04.941 "allow_unrecognized_csi": false, 00:17:04.941 "method": "bdev_nvme_attach_controller", 00:17:04.941 "req_id": 1 00:17:04.941 } 00:17:04.941 Got JSON-RPC error response 00:17:04.941 response: 00:17:04.941 { 00:17:04.941 "code": -5, 00:17:04.941 "message": "Input/output error" 00:17:04.941 } 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.941 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.199 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.458 request: 00:17:05.458 { 00:17:05.458 "name": "nvme0", 00:17:05.458 "trtype": 
"tcp", 00:17:05.458 "traddr": "10.0.0.2", 00:17:05.458 "adrfam": "ipv4", 00:17:05.458 "trsvcid": "4420", 00:17:05.458 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.458 "prchk_reftag": false, 00:17:05.458 "prchk_guard": false, 00:17:05.458 "hdgst": false, 00:17:05.458 "ddgst": false, 00:17:05.458 "dhchap_key": "key1", 00:17:05.458 "dhchap_ctrlr_key": "ckey1", 00:17:05.458 "allow_unrecognized_csi": false, 00:17:05.458 "method": "bdev_nvme_attach_controller", 00:17:05.458 "req_id": 1 00:17:05.458 } 00:17:05.458 Got JSON-RPC error response 00:17:05.458 response: 00:17:05.458 { 00:17:05.458 "code": -5, 00:17:05.458 "message": "Input/output error" 00:17:05.458 } 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3465446 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3465446 ']' 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3465446 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.458 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465446 00:17:05.717 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465446' 00:17:05.718 killing process with pid 3465446 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3465446 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3465446 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3487836 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3487836 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3487836 ']' 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.718 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3487836 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3487836 ']' 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.976 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 null0 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bR5 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.du2 ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.du2 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4A2 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.OMt ]] 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OMt 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lKU 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.9Yj ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Yj 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.haz 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.493 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.494 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.494 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.494 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.494 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.494 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.494 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.494 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.494 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.060 nvme0n1 00:17:07.060 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.060 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.060 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.318 { 00:17:07.318 "cntlid": 1, 00:17:07.318 "qid": 0, 00:17:07.318 "state": "enabled", 00:17:07.318 "thread": "nvmf_tgt_poll_group_000", 00:17:07.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.318 "listen_address": { 00:17:07.318 "trtype": "TCP", 00:17:07.318 "adrfam": "IPv4", 00:17:07.318 "traddr": "10.0.0.2", 00:17:07.318 "trsvcid": "4420" 00:17:07.318 }, 00:17:07.318 "peer_address": { 00:17:07.318 "trtype": "TCP", 00:17:07.318 "adrfam": "IPv4", 00:17:07.318 "traddr": 
"10.0.0.1", 00:17:07.318 "trsvcid": "51518" 00:17:07.318 }, 00:17:07.318 "auth": { 00:17:07.318 "state": "completed", 00:17:07.318 "digest": "sha512", 00:17:07.318 "dhgroup": "ffdhe8192" 00:17:07.318 } 00:17:07.318 } 00:17:07.318 ]' 00:17:07.318 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.318 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.318 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.577 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:07.578 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:08.144 10:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:08.402 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:08.402 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.402 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.402 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.402 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.403 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.663 request: 00:17:08.663 { 00:17:08.663 "name": "nvme0", 00:17:08.663 "trtype": "tcp", 00:17:08.663 "traddr": "10.0.0.2", 00:17:08.663 "adrfam": "ipv4", 00:17:08.663 "trsvcid": "4420", 00:17:08.663 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.663 "prchk_reftag": false, 00:17:08.663 "prchk_guard": false, 00:17:08.663 "hdgst": false, 00:17:08.663 "ddgst": false, 00:17:08.663 "dhchap_key": "key3", 00:17:08.663 
"allow_unrecognized_csi": false, 00:17:08.663 "method": "bdev_nvme_attach_controller", 00:17:08.663 "req_id": 1 00:17:08.663 } 00:17:08.663 Got JSON-RPC error response 00:17:08.663 response: 00:17:08.663 { 00:17:08.663 "code": -5, 00:17:08.664 "message": "Input/output error" 00:17:08.664 } 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.664 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.926 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.926 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.926 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.926 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.926 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.927 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.927 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.927 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.927 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.927 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.185 request: 00:17:09.185 { 00:17:09.185 "name": "nvme0", 00:17:09.185 "trtype": "tcp", 00:17:09.185 "traddr": "10.0.0.2", 00:17:09.185 "adrfam": "ipv4", 00:17:09.185 "trsvcid": "4420", 00:17:09.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.185 "prchk_reftag": false, 00:17:09.185 "prchk_guard": false, 00:17:09.185 "hdgst": false, 00:17:09.185 "ddgst": false, 00:17:09.185 "dhchap_key": "key3", 00:17:09.185 "allow_unrecognized_csi": false, 00:17:09.185 "method": "bdev_nvme_attach_controller", 00:17:09.185 "req_id": 1 00:17:09.185 } 00:17:09.185 Got JSON-RPC error response 00:17:09.185 response: 00:17:09.185 { 00:17:09.185 "code": -5, 00:17:09.185 "message": "Input/output error" 00:17:09.185 } 00:17:09.185 
10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.185 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.444 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.703 request: 00:17:09.703 { 00:17:09.703 "name": "nvme0", 00:17:09.703 "trtype": "tcp", 00:17:09.703 "traddr": "10.0.0.2", 00:17:09.703 "adrfam": "ipv4", 00:17:09.703 "trsvcid": "4420", 00:17:09.703 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.703 "prchk_reftag": false, 00:17:09.703 "prchk_guard": false, 00:17:09.703 "hdgst": false, 00:17:09.703 "ddgst": false, 00:17:09.703 "dhchap_key": "key0", 00:17:09.703 "dhchap_ctrlr_key": "key1", 00:17:09.703 "allow_unrecognized_csi": false, 00:17:09.703 "method": "bdev_nvme_attach_controller", 00:17:09.703 "req_id": 1 00:17:09.703 } 00:17:09.703 Got JSON-RPC error response 00:17:09.703 response: 00:17:09.703 { 00:17:09.703 "code": -5, 00:17:09.703 "message": "Input/output error" 00:17:09.703 } 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:09.703 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:09.962 nvme0n1 00:17:09.962 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:09.962 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:09.962 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.221 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.221 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.221 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.479 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:11.045 nvme0n1 00:17:11.045 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:11.045 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:11.045 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.303 
10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:11.303 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.560 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.560 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:11.561 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: --dhchap-ctrl-secret DHHC-1:03:YTcxZTJlYTk3ZTgyZWNmMDQ4MjBkYjRiZTY3OGUxYTA1ZTMxMWEzYzdiNWEzNTcyOWU1YWMxZjBkOTllMWI2NCNyHzY=: 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.126 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.385 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.952 request: 00:17:12.952 { 00:17:12.952 "name": "nvme0", 00:17:12.952 "trtype": "tcp", 00:17:12.952 "traddr": "10.0.0.2", 00:17:12.952 "adrfam": "ipv4", 00:17:12.952 "trsvcid": "4420", 00:17:12.952 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.952 "prchk_reftag": false, 00:17:12.952 "prchk_guard": false, 00:17:12.952 "hdgst": false, 00:17:12.952 "ddgst": false, 00:17:12.952 "dhchap_key": "key1", 00:17:12.952 "allow_unrecognized_csi": false, 00:17:12.952 "method": "bdev_nvme_attach_controller", 00:17:12.952 "req_id": 1 00:17:12.952 } 00:17:12.952 Got JSON-RPC error response 00:17:12.952 response: 00:17:12.952 { 00:17:12.952 "code": -5, 00:17:12.952 "message": "Input/output error" 00:17:12.952 } 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.952 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.519 nvme0n1 00:17:13.519 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:13.519 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.519 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:13.777 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.777 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.777 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:14.035 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:14.293 nvme0n1 00:17:14.293 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:14.293 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:14.293 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: '' 2s 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: ]] 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGM1M2Y5NmI5NmM4Zjc5NDFlNDJlZmU0YzM0MDAxYWKEtfkR: 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.550 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:17.083 
10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: 2s 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:17.083 10:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: ]] 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmVmZTI0NzZlYmQyYzEwZDViNzdjNTMxMWViYmUxMWRmMmZhOTM1ZWM5ODFmMWQwkEieYw==: 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.083 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.054 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.620 nvme0n1 00:17:19.620 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.621 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.621 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.621 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.621 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.621 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:20.188 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:20.446 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:20.446 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.446 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.706 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.273 request: 00:17:21.273 { 00:17:21.273 "name": "nvme0", 00:17:21.273 "dhchap_key": "key1", 00:17:21.273 "dhchap_ctrlr_key": "key3", 00:17:21.273 "method": "bdev_nvme_set_keys", 00:17:21.273 "req_id": 1 00:17:21.273 } 00:17:21.273 Got JSON-RPC error response 00:17:21.273 response: 00:17:21.273 { 00:17:21.273 "code": -13, 00:17:21.273 "message": "Permission denied" 00:17:21.273 } 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:21.273 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:21.273 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:22.209 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:22.209 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.209 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:22.467 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.468 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.403 nvme0n1 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.403 10:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.403 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.661 request: 00:17:23.661 { 00:17:23.661 "name": "nvme0", 00:17:23.661 "dhchap_key": "key2", 00:17:23.661 "dhchap_ctrlr_key": "key0", 00:17:23.661 "method": "bdev_nvme_set_keys", 00:17:23.661 "req_id": 1 00:17:23.661 } 00:17:23.661 Got JSON-RPC error response 00:17:23.661 response: 00:17:23.661 { 00:17:23.661 "code": -13, 00:17:23.661 "message": "Permission denied" 00:17:23.661 } 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.661 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:23.920 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:23.920 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3465617 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3465617 ']' 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3465617 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465617 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465617' 00:17:25.296 killing process with pid 3465617 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3465617 00:17:25.296 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3465617 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.555 rmmod nvme_tcp 00:17:25.555 rmmod nvme_fabrics 00:17:25.555 rmmod nvme_keyring 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3487836 ']' 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3487836 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3487836 ']' 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3487836 
00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3487836 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3487836' 00:17:25.555 killing process with pid 3487836 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3487836 00:17:25.555 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3487836 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.815 10:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.815 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bR5 /tmp/spdk.key-sha256.4A2 /tmp/spdk.key-sha384.lKU /tmp/spdk.key-sha512.haz /tmp/spdk.key-sha512.du2 /tmp/spdk.key-sha384.OMt /tmp/spdk.key-sha256.9Yj '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:28.374 00:17:28.374 real 2m33.752s 00:17:28.374 user 5m54.756s 00:17:28.374 sys 0m24.287s 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 ************************************ 00:17:28.374 END TEST nvmf_auth_target 00:17:28.374 ************************************ 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 ************************************ 00:17:28.374 START TEST nvmf_bdevio_no_huge 00:17:28.374 ************************************ 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.374 * Looking for test storage... 00:17:28.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.374 --rc genhtml_branch_coverage=1 00:17:28.374 --rc genhtml_function_coverage=1 00:17:28.374 --rc genhtml_legend=1 00:17:28.374 --rc geninfo_all_blocks=1 00:17:28.374 --rc geninfo_unexecuted_blocks=1 00:17:28.374 00:17:28.374 ' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.374 --rc genhtml_branch_coverage=1 00:17:28.374 --rc genhtml_function_coverage=1 00:17:28.374 --rc genhtml_legend=1 00:17:28.374 --rc geninfo_all_blocks=1 00:17:28.374 --rc geninfo_unexecuted_blocks=1 00:17:28.374 00:17:28.374 ' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.374 --rc genhtml_branch_coverage=1 00:17:28.374 --rc genhtml_function_coverage=1 00:17:28.374 --rc genhtml_legend=1 00:17:28.374 --rc geninfo_all_blocks=1 00:17:28.374 --rc geninfo_unexecuted_blocks=1 00:17:28.374 00:17:28.374 ' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.374 --rc genhtml_branch_coverage=1 
00:17:28.374 --rc genhtml_function_coverage=1 00:17:28.374 --rc genhtml_legend=1 00:17:28.374 --rc geninfo_all_blocks=1 00:17:28.374 --rc geninfo_unexecuted_blocks=1 00:17:28.374 00:17:28.374 ' 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.374 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.375 10:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.375 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:34.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:34.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:34.943 Found net devices under 0000:86:00.0: cvl_0_0 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.943 
10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:34.943 Found net devices under 0000:86:00.1: cvl_0_1 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:34.943 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:34.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:17:34.944 00:17:34.944 --- 10.0.0.2 ping statistics --- 00:17:34.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.944 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:17:34.944 00:17:34.944 --- 10.0.0.1 ping statistics --- 00:17:34.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.944 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3494724 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3494724 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3494724 ']' 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.944 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.944 [2024-11-20 10:34:34.795289] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:17:34.944 [2024-11-20 10:34:34.795341] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:34.944 [2024-11-20 10:34:34.883078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.944 [2024-11-20 10:34:34.930513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.944 [2024-11-20 10:34:34.930549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.944 [2024-11-20 10:34:34.930558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.944 [2024-11-20 10:34:34.930564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.944 [2024-11-20 10:34:34.930569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.944 [2024-11-20 10:34:34.931674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.944 [2024-11-20 10:34:34.931780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:34.944 [2024-11-20 10:34:34.931888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.944 [2024-11-20 10:34:34.931889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:34.944 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.944 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:34.944 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.944 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.944 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.203 [2024-11-20 10:34:35.694940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.203 10:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.203 Malloc0 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.203 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.204 [2024-11-20 10:34:35.735207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.204 10:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.204 { 00:17:35.204 "params": { 00:17:35.204 "name": "Nvme$subsystem", 00:17:35.204 "trtype": "$TEST_TRANSPORT", 00:17:35.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.204 "adrfam": "ipv4", 00:17:35.204 "trsvcid": "$NVMF_PORT", 00:17:35.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.204 "hdgst": ${hdgst:-false}, 00:17:35.204 "ddgst": ${ddgst:-false} 00:17:35.204 }, 00:17:35.204 "method": "bdev_nvme_attach_controller" 00:17:35.204 } 00:17:35.204 EOF 00:17:35.204 )") 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:35.204 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:35.204 "params": { 00:17:35.204 "name": "Nvme1", 00:17:35.204 "trtype": "tcp", 00:17:35.204 "traddr": "10.0.0.2", 00:17:35.204 "adrfam": "ipv4", 00:17:35.204 "trsvcid": "4420", 00:17:35.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.204 "hdgst": false, 00:17:35.204 "ddgst": false 00:17:35.204 }, 00:17:35.204 "method": "bdev_nvme_attach_controller" 00:17:35.204 }' 00:17:35.204 [2024-11-20 10:34:35.787275] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:17:35.204 [2024-11-20 10:34:35.787320] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3494955 ] 00:17:35.204 [2024-11-20 10:34:35.866868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.204 [2024-11-20 10:34:35.916108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.204 [2024-11-20 10:34:35.916216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.204 [2024-11-20 10:34:35.916217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.462 I/O targets: 00:17:35.462 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:35.462 00:17:35.462 00:17:35.462 CUnit - A unit testing framework for C - Version 2.1-3 00:17:35.462 http://cunit.sourceforge.net/ 00:17:35.462 00:17:35.462 00:17:35.462 Suite: bdevio tests on: Nvme1n1 00:17:35.462 Test: blockdev write read block ...passed 00:17:35.462 Test: blockdev write zeroes read block ...passed 00:17:35.462 Test: blockdev write zeroes read no split ...passed 00:17:35.721 Test: blockdev write zeroes 
read split ...passed 00:17:35.721 Test: blockdev write zeroes read split partial ...passed 00:17:35.721 Test: blockdev reset ...[2024-11-20 10:34:36.287732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:35.721 [2024-11-20 10:34:36.287794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b6920 (9): Bad file descriptor 00:17:35.721 [2024-11-20 10:34:36.347539] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:35.721 passed 00:17:35.721 Test: blockdev write read 8 blocks ...passed 00:17:35.721 Test: blockdev write read size > 128k ...passed 00:17:35.721 Test: blockdev write read invalid size ...passed 00:17:35.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.721 Test: blockdev write read max offset ...passed 00:17:35.979 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:35.979 Test: blockdev writev readv 8 blocks ...passed 00:17:35.979 Test: blockdev writev readv 30 x 1block ...passed 00:17:35.979 Test: blockdev writev readv block ...passed 00:17:35.979 Test: blockdev writev readv size > 128k ...passed 00:17:35.979 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:35.979 Test: blockdev comparev and writev ...[2024-11-20 10:34:36.561770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.561796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.561810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 
10:34:36.561818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.562627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.979 [2024-11-20 10:34:36.562635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.979 passed 00:17:35.979 Test: blockdev nvme passthru rw ...passed 00:17:35.979 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:34:36.646314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.979 [2024-11-20 10:34:36.646328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.646433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.979 [2024-11-20 10:34:36.646443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.646539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.979 [2024-11-20 10:34:36.646549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.979 [2024-11-20 10:34:36.646650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.979 [2024-11-20 10:34:36.646659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.979 passed 00:17:35.979 Test: blockdev nvme admin passthru ...passed 00:17:35.979 Test: blockdev copy ...passed 00:17:35.979 00:17:35.979 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.979 suites 1 1 n/a 0 0 00:17:35.979 tests 23 23 23 0 0 00:17:35.979 asserts 152 152 152 0 n/a 00:17:35.979 00:17:35.979 Elapsed time = 1.250 seconds 
00:17:36.238 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.238 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.238 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.496 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.496 rmmod nvme_tcp 00:17:36.496 rmmod nvme_fabrics 00:17:36.496 rmmod nvme_keyring 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3494724 ']' 00:17:36.496 10:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3494724 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3494724 ']' 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3494724 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3494724 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3494724' 00:17:36.496 killing process with pid 3494724 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3494724 00:17:36.496 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3494724 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.755 10:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.755 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.303 00:17:39.303 real 0m10.912s 00:17:39.303 user 0m13.618s 00:17:39.303 sys 0m5.428s 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.303 ************************************ 00:17:39.303 END TEST nvmf_bdevio_no_huge 00:17:39.303 ************************************ 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.303 10:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.303 
************************************ 00:17:39.303 START TEST nvmf_tls 00:17:39.303 ************************************ 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.304 * Looking for test storage... 00:17:39.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.304 --rc genhtml_branch_coverage=1 00:17:39.304 --rc genhtml_function_coverage=1 00:17:39.304 --rc genhtml_legend=1 00:17:39.304 --rc geninfo_all_blocks=1 00:17:39.304 --rc geninfo_unexecuted_blocks=1 00:17:39.304 00:17:39.304 ' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.304 --rc genhtml_branch_coverage=1 00:17:39.304 --rc genhtml_function_coverage=1 00:17:39.304 --rc genhtml_legend=1 00:17:39.304 --rc geninfo_all_blocks=1 00:17:39.304 --rc geninfo_unexecuted_blocks=1 00:17:39.304 00:17:39.304 ' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.304 --rc genhtml_branch_coverage=1 00:17:39.304 --rc genhtml_function_coverage=1 00:17:39.304 --rc genhtml_legend=1 00:17:39.304 --rc geninfo_all_blocks=1 00:17:39.304 --rc geninfo_unexecuted_blocks=1 00:17:39.304 00:17:39.304 ' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.304 --rc genhtml_branch_coverage=1 00:17:39.304 --rc genhtml_function_coverage=1 00:17:39.304 --rc genhtml_legend=1 00:17:39.304 --rc geninfo_all_blocks=1 00:17:39.304 --rc geninfo_unexecuted_blocks=1 00:17:39.304 00:17:39.304 ' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.304 
10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.304 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:39.305 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.889 10:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:45.889 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:45.889 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.889 10:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:45.889 Found net devices under 0000:86:00.0: cvl_0_0 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.889 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:45.890 Found net devices under 0000:86:00.1: cvl_0_1 00:17:45.890 10:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:45.890 
10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:45.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:17:45.890 00:17:45.890 --- 10.0.0.2 ping statistics --- 00:17:45.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.890 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:17:45.890 00:17:45.890 --- 10.0.0.1 ping statistics --- 00:17:45.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.890 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3498715 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3498715 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3498715 ']' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.890 [2024-11-20 10:34:45.759301] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:17:45.890 [2024-11-20 10:34:45.759347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.890 [2024-11-20 10:34:45.838817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.890 [2024-11-20 10:34:45.877896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.890 [2024-11-20 10:34:45.877929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:45.890 [2024-11-20 10:34:45.877936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.890 [2024-11-20 10:34:45.877942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.890 [2024-11-20 10:34:45.877971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.890 [2024-11-20 10:34:45.878533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:45.890 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:45.890 true 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:45.890 
10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.890 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:46.150 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:46.150 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:46.150 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:46.409 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.409 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:46.409 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:46.409 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:46.409 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.409 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:46.668 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:46.668 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:46.668 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:46.927 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:46.927 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.186 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:47.186 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:47.186 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:47.186 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:47.186 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.446 10:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.tyeaEkQEMM 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.0ukz7ZHTHC 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tyeaEkQEMM 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.0ukz7ZHTHC 00:17:47.446 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:47.705 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:47.963 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.tyeaEkQEMM 00:17:47.963 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tyeaEkQEMM 00:17:47.963 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.222 [2024-11-20 10:34:48.765764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.222 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.481 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.481 [2024-11-20 10:34:49.158782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.481 [2024-11-20 10:34:49.159020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.481 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.740 malloc0 00:17:48.740 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyeaEkQEMM 00:17:49.257 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.257 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tyeaEkQEMM 00:18:01.462 Initializing NVMe Controllers 00:18:01.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.462 Initialization complete. Launching workers. 
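The `format_interchange_psk` calls traced earlier (lines tagged `nvmf/common.sh@743`) wrap a raw hex key into the NVMe TLS PSK interchange format: the literal prefix `NVMeTLSkey-1`, a two-digit hash identifier, then base64 of the configured key bytes with a CRC-32 appended, and a trailing colon. A minimal Python sketch; the little-endian placement of the CRC-32 is my reading of the helper and is an assumption, not something this log confirms:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac_id: int = 1) -> str:
    """Produce an NVMe TLS PSK interchange string:
    NVMeTLSkey-1:<hh>:<base64(key bytes || CRC-32 of key bytes)>:
    """
    raw = key.encode("ascii")
    # CRC-32 over the key bytes, appended little-endian (assumed byte order)
    checksum = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + checksum).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02d}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

For the 32-character key above this yields a 65-character string whose base64 body starts with `MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVl`, matching the `key=NVMeTLSkey-1:01:...` value the trace writes to `/tmp/tmp.tyeaEkQEMM`; the final base64 characters encode the CRC-32 and depend on the byte-order assumption.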
00:18:01.462 ======================================================== 00:18:01.462 Latency(us) 00:18:01.462 Device Information : IOPS MiB/s Average min max 00:18:01.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16385.16 64.00 3905.84 1573.71 5870.25 00:18:01.462 ======================================================== 00:18:01.462 Total : 16385.16 64.00 3905.84 1573.71 5870.25 00:18:01.462 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyeaEkQEMM 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyeaEkQEMM 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3501086 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3501086 /var/tmp/bdevperf.sock 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3501086 ']' 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.462 [2024-11-20 10:35:00.104680] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:01.462 [2024-11-20 10:35:00.104732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501086 ] 00:18:01.462 [2024-11-20 10:35:00.179568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.462 [2024-11-20 10:35:00.221692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyeaEkQEMM 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:01.462 [2024-11-20 10:35:00.676631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.462 TLSTESTn1 00:18:01.462 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.462 Running I/O for 10 seconds... 00:18:02.399 5207.00 IOPS, 20.34 MiB/s [2024-11-20T09:35:04.066Z] 5199.00 IOPS, 20.31 MiB/s [2024-11-20T09:35:05.000Z] 5269.33 IOPS, 20.58 MiB/s [2024-11-20T09:35:05.936Z] 5336.50 IOPS, 20.85 MiB/s [2024-11-20T09:35:06.872Z] 5368.20 IOPS, 20.97 MiB/s [2024-11-20T09:35:08.248Z] 5309.33 IOPS, 20.74 MiB/s [2024-11-20T09:35:09.184Z] 5215.14 IOPS, 20.37 MiB/s [2024-11-20T09:35:10.120Z] 5172.88 IOPS, 20.21 MiB/s [2024-11-20T09:35:11.055Z] 5142.89 IOPS, 20.09 MiB/s [2024-11-20T09:35:11.055Z] 5105.80 IOPS, 19.94 MiB/s 00:18:10.324 Latency(us) 00:18:10.324 [2024-11-20T09:35:11.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.324 Verification LBA range: start 0x0 length 0x2000 00:18:10.324 TLSTESTn1 : 10.02 5108.69 19.96 0.00 0.00 25018.10 6525.11 31229.33 00:18:10.324 [2024-11-20T09:35:11.055Z] =================================================================================================================== 00:18:10.324 [2024-11-20T09:35:11.055Z] Total : 5108.69 19.96 0.00 0.00 25018.10 6525.11 31229.33 00:18:10.324 { 00:18:10.324 "results": [ 00:18:10.324 { 00:18:10.324 "job": "TLSTESTn1", 00:18:10.324 "core_mask": "0x4", 00:18:10.324 "workload": "verify", 00:18:10.324 "status": "finished", 00:18:10.324 "verify_range": { 00:18:10.324 "start": 0, 00:18:10.324 "length": 8192 00:18:10.324 }, 00:18:10.324 "queue_depth": 128, 00:18:10.324 "io_size": 4096, 00:18:10.324 "runtime": 10.019202, 00:18:10.324 "iops": 
5108.690292899574, 00:18:10.324 "mibps": 19.95582145663896, 00:18:10.324 "io_failed": 0, 00:18:10.324 "io_timeout": 0, 00:18:10.324 "avg_latency_us": 25018.102113407887, 00:18:10.324 "min_latency_us": 6525.106086956522, 00:18:10.324 "max_latency_us": 31229.328695652173 00:18:10.324 } 00:18:10.324 ], 00:18:10.324 "core_count": 1 00:18:10.324 } 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3501086 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3501086 ']' 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3501086 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3501086 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3501086' 00:18:10.324 killing process with pid 3501086 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3501086 00:18:10.324 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.324 00:18:10.324 Latency(us) 00:18:10.324 [2024-11-20T09:35:11.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.324 [2024-11-20T09:35:11.055Z] 
=================================================================================================================== 00:18:10.324 [2024-11-20T09:35:11.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.324 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3501086 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ukz7ZHTHC 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ukz7ZHTHC 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ukz7ZHTHC 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0ukz7ZHTHC 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3502915 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3502915 /var/tmp/bdevperf.sock 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3502915 ']' 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.584 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.584 [2024-11-20 10:35:11.175149] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:10.584 [2024-11-20 10:35:11.175197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502915 ] 00:18:10.584 [2024-11-20 10:35:11.250525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.584 [2024-11-20 10:35:11.292214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.842 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.842 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.842 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0ukz7ZHTHC 00:18:10.842 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.101 [2024-11-20 10:35:11.742860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.101 [2024-11-20 10:35:11.751973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:11.101 [2024-11-20 10:35:11.752215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe170 (107): Transport endpoint is not connected 00:18:11.101 [2024-11-20 10:35:11.753208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe170 (9): Bad file descriptor 00:18:11.102 [2024-11-20 
10:35:11.754210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:11.102 [2024-11-20 10:35:11.754219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:11.102 [2024-11-20 10:35:11.754226] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:11.102 [2024-11-20 10:35:11.754236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:11.102 request: 00:18:11.102 { 00:18:11.102 "name": "TLSTEST", 00:18:11.102 "trtype": "tcp", 00:18:11.102 "traddr": "10.0.0.2", 00:18:11.102 "adrfam": "ipv4", 00:18:11.102 "trsvcid": "4420", 00:18:11.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.102 "prchk_reftag": false, 00:18:11.102 "prchk_guard": false, 00:18:11.102 "hdgst": false, 00:18:11.102 "ddgst": false, 00:18:11.102 "psk": "key0", 00:18:11.102 "allow_unrecognized_csi": false, 00:18:11.102 "method": "bdev_nvme_attach_controller", 00:18:11.102 "req_id": 1 00:18:11.102 } 00:18:11.102 Got JSON-RPC error response 00:18:11.102 response: 00:18:11.102 { 00:18:11.102 "code": -5, 00:18:11.102 "message": "Input/output error" 00:18:11.102 } 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3502915 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3502915 ']' 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3502915 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502915 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502915' 00:18:11.102 killing process with pid 3502915 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3502915 00:18:11.102 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.102 00:18:11.102 Latency(us) 00:18:11.102 [2024-11-20T09:35:11.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.102 [2024-11-20T09:35:11.833Z] =================================================================================================================== 00:18:11.102 [2024-11-20T09:35:11.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.102 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3502915 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tyeaEkQEMM 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tyeaEkQEMM 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tyeaEkQEMM 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyeaEkQEMM 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3502936 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3502936 
/var/tmp/bdevperf.sock 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3502936 ']' 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.361 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.361 [2024-11-20 10:35:12.025531] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
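Each `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ...` call above is a JSON-RPC 2.0 exchange over a UNIX domain socket, and the log prints the request and error response verbatim (`"code": -5`, `"message": "Input/output error"` when the PSK does not match). A minimal client sketch under simplifying assumptions — the half-close used for framing here is a stand-in for SPDK's real streaming JSON parser, and the function name is mine:

```python
import json
import socket

def rpc_call(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request over a UNIX socket and parse the reply.

    The real rpc.py streams bare JSON with no length prefix; this sketch
    half-closes the write side and reads until a complete object parses.
    """
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        s.shutdown(socket.SHUT_WR)   # signal end-of-request (simplified framing)
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)   # full response received
            except json.JSONDecodeError:
                continue                 # partial read; keep receiving
    return json.loads(buf)
```

A caller would then branch on `"error"` versus `"result"` in the returned dict, which is exactly the distinction the `NOT run_bdevperf` negative tests above rely on.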
00:18:11.361 [2024-11-20 10:35:12.025582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502936 ] 00:18:11.620 [2024-11-20 10:35:12.100939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.620 [2024-11-20 10:35:12.138695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.620 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.620 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.620 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyeaEkQEMM 00:18:11.878 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:12.137 [2024-11-20 10:35:12.618356] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.137 [2024-11-20 10:35:12.624098] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:12.137 [2024-11-20 10:35:12.624121] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:12.137 [2024-11-20 10:35:12.624145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:12.137 [2024-11-20 10:35:12.624740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158170 (107): Transport endpoint is not connected 00:18:12.137 [2024-11-20 10:35:12.625732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158170 (9): Bad file descriptor 00:18:12.137 [2024-11-20 10:35:12.626734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:12.137 [2024-11-20 10:35:12.626745] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:12.138 [2024-11-20 10:35:12.626752] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:12.138 [2024-11-20 10:35:12.626763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:12.138 request: 00:18:12.138 { 00:18:12.138 "name": "TLSTEST", 00:18:12.138 "trtype": "tcp", 00:18:12.138 "traddr": "10.0.0.2", 00:18:12.138 "adrfam": "ipv4", 00:18:12.138 "trsvcid": "4420", 00:18:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:12.138 "prchk_reftag": false, 00:18:12.138 "prchk_guard": false, 00:18:12.138 "hdgst": false, 00:18:12.138 "ddgst": false, 00:18:12.138 "psk": "key0", 00:18:12.138 "allow_unrecognized_csi": false, 00:18:12.138 "method": "bdev_nvme_attach_controller", 00:18:12.138 "req_id": 1 00:18:12.138 } 00:18:12.138 Got JSON-RPC error response 00:18:12.138 response: 00:18:12.138 { 00:18:12.138 "code": -5, 00:18:12.138 "message": "Input/output error" 00:18:12.138 } 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3502936 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3502936 ']' 00:18:12.138 10:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3502936 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502936 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502936' 00:18:12.138 killing process with pid 3502936 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3502936 00:18:12.138 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.138 00:18:12.138 Latency(us) 00:18:12.138 [2024-11-20T09:35:12.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.138 [2024-11-20T09:35:12.869Z] =================================================================================================================== 00:18:12.138 [2024-11-20T09:35:12.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3502936 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.138 10:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyeaEkQEMM 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyeaEkQEMM 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyeaEkQEMM 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyeaEkQEMM 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3503172 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3503172 /var/tmp/bdevperf.sock 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3503172 ']' 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.138 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.397 [2024-11-20 10:35:12.888219] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:12.397 [2024-11-20 10:35:12.888270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503172 ] 00:18:12.397 [2024-11-20 10:35:12.953229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.397 [2024-11-20 10:35:12.990701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.397 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.397 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.397 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyeaEkQEMM 00:18:12.655 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.915 [2024-11-20 10:35:13.445135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.915 [2024-11-20 10:35:13.456201] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:12.915 [2024-11-20 10:35:13.456221] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:12.915 [2024-11-20 10:35:13.456243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:12.915 [2024-11-20 10:35:13.456597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53170 (107): Transport endpoint is not connected 00:18:12.915 [2024-11-20 10:35:13.457590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53170 (9): Bad file descriptor 00:18:12.915 [2024-11-20 10:35:13.458591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:12.915 [2024-11-20 10:35:13.458600] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:12.915 [2024-11-20 10:35:13.458607] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:12.915 [2024-11-20 10:35:13.458617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:12.915 request: 00:18:12.915 { 00:18:12.915 "name": "TLSTEST", 00:18:12.915 "trtype": "tcp", 00:18:12.915 "traddr": "10.0.0.2", 00:18:12.915 "adrfam": "ipv4", 00:18:12.915 "trsvcid": "4420", 00:18:12.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:12.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.915 "prchk_reftag": false, 00:18:12.915 "prchk_guard": false, 00:18:12.915 "hdgst": false, 00:18:12.915 "ddgst": false, 00:18:12.915 "psk": "key0", 00:18:12.915 "allow_unrecognized_csi": false, 00:18:12.915 "method": "bdev_nvme_attach_controller", 00:18:12.915 "req_id": 1 00:18:12.915 } 00:18:12.915 Got JSON-RPC error response 00:18:12.915 response: 00:18:12.915 { 00:18:12.915 "code": -5, 00:18:12.915 "message": "Input/output error" 00:18:12.915 } 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3503172 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3503172 ']' 00:18:12.915 10:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3503172 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3503172 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3503172' 00:18:12.915 killing process with pid 3503172 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3503172 00:18:12.915 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.915 00:18:12.915 Latency(us) 00:18:12.915 [2024-11-20T09:35:13.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.915 [2024-11-20T09:35:13.646Z] =================================================================================================================== 00:18:12.915 [2024-11-20T09:35:13.646Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.915 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3503172 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.174 10:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3503270 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.174 10:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3503270 /var/tmp/bdevperf.sock 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3503270 ']' 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.174 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.174 [2024-11-20 10:35:13.737393] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:13.174 [2024-11-20 10:35:13.737446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503270 ] 00:18:13.174 [2024-11-20 10:35:13.812810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.174 [2024-11-20 10:35:13.853086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.433 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.433 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.433 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:13.433 [2024-11-20 10:35:14.108086] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:13.433 [2024-11-20 10:35:14.108122] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:13.433 request: 00:18:13.433 { 00:18:13.433 "name": "key0", 00:18:13.433 "path": "", 00:18:13.433 "method": "keyring_file_add_key", 00:18:13.433 "req_id": 1 00:18:13.433 } 00:18:13.433 Got JSON-RPC error response 00:18:13.433 response: 00:18:13.433 { 00:18:13.433 "code": -1, 00:18:13.433 "message": "Operation not permitted" 00:18:13.433 } 00:18:13.433 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.692 [2024-11-20 10:35:14.312709] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:13.692 [2024-11-20 10:35:14.312738] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:13.692 request: 00:18:13.692 { 00:18:13.692 "name": "TLSTEST", 00:18:13.692 "trtype": "tcp", 00:18:13.692 "traddr": "10.0.0.2", 00:18:13.692 "adrfam": "ipv4", 00:18:13.692 "trsvcid": "4420", 00:18:13.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.692 "prchk_reftag": false, 00:18:13.692 "prchk_guard": false, 00:18:13.692 "hdgst": false, 00:18:13.692 "ddgst": false, 00:18:13.692 "psk": "key0", 00:18:13.692 "allow_unrecognized_csi": false, 00:18:13.692 "method": "bdev_nvme_attach_controller", 00:18:13.692 "req_id": 1 00:18:13.692 } 00:18:13.692 Got JSON-RPC error response 00:18:13.692 response: 00:18:13.692 { 00:18:13.692 "code": -126, 00:18:13.692 "message": "Required key not available" 00:18:13.692 } 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3503270 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3503270 ']' 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3503270 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3503270 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3503270' 00:18:13.692 killing process with pid 3503270 
00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3503270 00:18:13.692 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.692 00:18:13.692 Latency(us) 00:18:13.692 [2024-11-20T09:35:14.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.692 [2024-11-20T09:35:14.423Z] =================================================================================================================== 00:18:13.692 [2024-11-20T09:35:14.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.692 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3503270 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3498715 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3498715 ']' 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3498715 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498715 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498715' 00:18:13.951 killing process with pid 3498715 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3498715 00:18:13.951 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3498715 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lsJe3HrWob 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.211 10:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lsJe3HrWob 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3503435 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3503435 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3503435 ']' 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.211 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.211 [2024-11-20 10:35:14.847326] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:14.211 [2024-11-20 10:35:14.847377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.211 [2024-11-20 10:35:14.927845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.470 [2024-11-20 10:35:14.969214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.470 [2024-11-20 10:35:14.969248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.470 [2024-11-20 10:35:14.969256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.470 [2024-11-20 10:35:14.969263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.470 [2024-11-20 10:35:14.969268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:14.470 [2024-11-20 10:35:14.969843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.470 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.470 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lsJe3HrWob 00:18:14.471 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.730 [2024-11-20 10:35:15.281711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.730 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.989 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.989 [2024-11-20 10:35:15.682745] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.989 [2024-11-20 10:35:15.682982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:14.989 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.247 malloc0 00:18:15.247 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.543 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lsJe3HrWob 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lsJe3HrWob 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3503777 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.892 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3503777 /var/tmp/bdevperf.sock 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3503777 ']' 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.892 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.892 [2024-11-20 10:35:16.531455] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:15.892 [2024-11-20 10:35:16.531505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503777 ] 00:18:16.172 [2024-11-20 10:35:16.608468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.172 [2024-11-20 10:35:16.651616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.172 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.172 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.172 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:16.432 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.432 [2024-11-20 10:35:17.110426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.692 TLSTESTn1 00:18:16.692 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:16.692 Running I/O for 10 seconds... 
00:18:19.005 5285.00 IOPS, 20.64 MiB/s [2024-11-20T09:35:20.317Z] 5330.00 IOPS, 20.82 MiB/s [2024-11-20T09:35:21.693Z] 5410.67 IOPS, 21.14 MiB/s [2024-11-20T09:35:22.628Z] 5418.50 IOPS, 21.17 MiB/s [2024-11-20T09:35:23.565Z] 5436.20 IOPS, 21.24 MiB/s [2024-11-20T09:35:24.502Z] 5434.00 IOPS, 21.23 MiB/s [2024-11-20T09:35:25.439Z] 5450.00 IOPS, 21.29 MiB/s [2024-11-20T09:35:26.374Z] 5445.75 IOPS, 21.27 MiB/s [2024-11-20T09:35:27.751Z] 5453.11 IOPS, 21.30 MiB/s [2024-11-20T09:35:27.751Z] 5418.30 IOPS, 21.17 MiB/s 00:18:27.020 Latency(us) 00:18:27.020 [2024-11-20T09:35:27.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.020 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.020 Verification LBA range: start 0x0 length 0x2000 00:18:27.020 TLSTESTn1 : 10.02 5421.60 21.18 0.00 0.00 23571.36 6468.12 24732.72 00:18:27.020 [2024-11-20T09:35:27.751Z] =================================================================================================================== 00:18:27.020 [2024-11-20T09:35:27.751Z] Total : 5421.60 21.18 0.00 0.00 23571.36 6468.12 24732.72 00:18:27.020 { 00:18:27.020 "results": [ 00:18:27.020 { 00:18:27.020 "job": "TLSTESTn1", 00:18:27.020 "core_mask": "0x4", 00:18:27.020 "workload": "verify", 00:18:27.020 "status": "finished", 00:18:27.020 "verify_range": { 00:18:27.020 "start": 0, 00:18:27.020 "length": 8192 00:18:27.020 }, 00:18:27.020 "queue_depth": 128, 00:18:27.020 "io_size": 4096, 00:18:27.020 "runtime": 10.017339, 00:18:27.020 "iops": 5421.599488646636, 00:18:27.020 "mibps": 21.178123002525922, 00:18:27.020 "io_failed": 0, 00:18:27.020 "io_timeout": 0, 00:18:27.020 "avg_latency_us": 23571.359767133923, 00:18:27.020 "min_latency_us": 6468.118260869565, 00:18:27.020 "max_latency_us": 24732.71652173913 00:18:27.020 } 00:18:27.020 ], 00:18:27.020 "core_count": 1 00:18:27.020 } 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3503777 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3503777 ']' 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3503777 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3503777 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3503777' 00:18:27.020 killing process with pid 3503777 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3503777 00:18:27.020 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.020 00:18:27.020 Latency(us) 00:18:27.020 [2024-11-20T09:35:27.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.020 [2024-11-20T09:35:27.751Z] =================================================================================================================== 00:18:27.020 [2024-11-20T09:35:27.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3503777 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lsJe3HrWob 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lsJe3HrWob 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lsJe3HrWob 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.020 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lsJe3HrWob 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lsJe3HrWob 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3505529 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3505529 
/var/tmp/bdevperf.sock 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3505529 ']' 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.021 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.021 [2024-11-20 10:35:27.622940] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:27.021 [2024-11-20 10:35:27.622997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505529 ] 00:18:27.021 [2024-11-20 10:35:27.699951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.021 [2024-11-20 10:35:27.739127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.279 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.279 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.279 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:27.538 [2024-11-20 10:35:28.018075] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lsJe3HrWob': 0100666 00:18:27.539 [2024-11-20 10:35:28.018106] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:27.539 request: 00:18:27.539 { 00:18:27.539 "name": "key0", 00:18:27.539 "path": "/tmp/tmp.lsJe3HrWob", 00:18:27.539 "method": "keyring_file_add_key", 00:18:27.539 "req_id": 1 00:18:27.539 } 00:18:27.539 Got JSON-RPC error response 00:18:27.539 response: 00:18:27.539 { 00:18:27.539 "code": -1, 00:18:27.539 "message": "Operation not permitted" 00:18:27.539 } 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.539 [2024-11-20 10:35:28.218683] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.539 [2024-11-20 10:35:28.218709] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:27.539 request: 00:18:27.539 { 00:18:27.539 "name": "TLSTEST", 00:18:27.539 "trtype": "tcp", 00:18:27.539 "traddr": "10.0.0.2", 00:18:27.539 "adrfam": "ipv4", 00:18:27.539 "trsvcid": "4420", 00:18:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.539 "prchk_reftag": false, 00:18:27.539 "prchk_guard": false, 00:18:27.539 "hdgst": false, 00:18:27.539 "ddgst": false, 00:18:27.539 "psk": "key0", 00:18:27.539 "allow_unrecognized_csi": false, 00:18:27.539 "method": "bdev_nvme_attach_controller", 00:18:27.539 "req_id": 1 00:18:27.539 } 00:18:27.539 Got JSON-RPC error response 00:18:27.539 response: 00:18:27.539 { 00:18:27.539 "code": -126, 00:18:27.539 "message": "Required key not available" 00:18:27.539 } 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3505529 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3505529 ']' 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3505529 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.539 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505529 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3505529' 00:18:27.798 killing process with pid 3505529 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3505529 00:18:27.798 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.798 00:18:27.798 Latency(us) 00:18:27.798 [2024-11-20T09:35:28.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.798 [2024-11-20T09:35:28.529Z] =================================================================================================================== 00:18:27.798 [2024-11-20T09:35:28.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3505529 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3503435 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3503435 ']' 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3503435 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3503435 00:18:27.798 
10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3503435' 00:18:27.798 killing process with pid 3503435 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3503435 00:18:27.798 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3503435 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3505773 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3505773 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3505773 ']' 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:28.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.057 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.057 [2024-11-20 10:35:28.730317] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:28.057 [2024-11-20 10:35:28.730365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.316 [2024-11-20 10:35:28.810427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.316 [2024-11-20 10:35:28.846367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.316 [2024-11-20 10:35:28.846401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.316 [2024-11-20 10:35:28.846409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.316 [2024-11-20 10:35:28.846415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.316 [2024-11-20 10:35:28.846420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.316 [2024-11-20 10:35:28.846906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lsJe3HrWob 00:18:28.316 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.580 [2024-11-20 10:35:29.166631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.580 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.843 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.843 [2024-11-20 10:35:29.567670] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.843 [2024-11-20 10:35:29.567884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.101 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.101 malloc0 00:18:29.101 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.359 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:29.622 [2024-11-20 10:35:30.161198] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lsJe3HrWob': 0100666 00:18:29.622 [2024-11-20 10:35:30.161229] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:29.622 request: 00:18:29.622 { 00:18:29.622 "name": "key0", 00:18:29.622 "path": "/tmp/tmp.lsJe3HrWob", 00:18:29.622 "method": "keyring_file_add_key", 00:18:29.622 "req_id": 1 
00:18:29.622 } 00:18:29.622 Got JSON-RPC error response 00:18:29.622 response: 00:18:29.622 { 00:18:29.622 "code": -1, 00:18:29.622 "message": "Operation not permitted" 00:18:29.622 } 00:18:29.622 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.883 [2024-11-20 10:35:30.369764] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:29.883 [2024-11-20 10:35:30.369798] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:29.883 request: 00:18:29.883 { 00:18:29.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.883 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.883 "psk": "key0", 00:18:29.883 "method": "nvmf_subsystem_add_host", 00:18:29.883 "req_id": 1 00:18:29.883 } 00:18:29.883 Got JSON-RPC error response 00:18:29.883 response: 00:18:29.883 { 00:18:29.883 "code": -32603, 00:18:29.883 "message": "Internal error" 00:18:29.883 } 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3505773 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3505773 ']' 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3505773 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:29.883 10:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505773 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505773' 00:18:29.883 killing process with pid 3505773 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3505773 00:18:29.883 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3505773 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lsJe3HrWob 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3506095 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3506095 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3506095 ']' 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.143 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.143 [2024-11-20 10:35:30.681677] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:30.143 [2024-11-20 10:35:30.681727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.143 [2024-11-20 10:35:30.762967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.143 [2024-11-20 10:35:30.801623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.143 [2024-11-20 10:35:30.801658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.143 [2024-11-20 10:35:30.801665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.143 [2024-11-20 10:35:30.801671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.143 [2024-11-20 10:35:30.801675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.143 [2024-11-20 10:35:30.802242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.401 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lsJe3HrWob 00:18:30.402 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.402 [2024-11-20 10:35:31.113514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.660 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.660 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.919 [2024-11-20 10:35:31.506520] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.919 [2024-11-20 10:35:31.506721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:30.919 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.177 malloc0 00:18:31.177 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.436 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:31.436 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3506510 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3506510 /var/tmp/bdevperf.sock 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3506510 ']' 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:31.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.695 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.695 [2024-11-20 10:35:32.366007] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:31.695 [2024-11-20 10:35:32.366055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506510 ] 00:18:31.954 [2024-11-20 10:35:32.444738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.954 [2024-11-20 10:35:32.485106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.954 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.954 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.954 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:32.213 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.472 [2024-11-20 10:35:32.943601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.472 TLSTESTn1 00:18:32.472 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:32.731 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:32.731 "subsystems": [ 00:18:32.731 { 00:18:32.731 "subsystem": "keyring", 00:18:32.731 "config": [ 00:18:32.731 { 00:18:32.731 "method": "keyring_file_add_key", 00:18:32.731 "params": { 00:18:32.731 "name": "key0", 00:18:32.731 "path": "/tmp/tmp.lsJe3HrWob" 00:18:32.731 } 00:18:32.731 } 00:18:32.731 ] 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "subsystem": "iobuf", 00:18:32.731 "config": [ 00:18:32.731 { 00:18:32.731 "method": "iobuf_set_options", 00:18:32.731 "params": { 00:18:32.731 "small_pool_count": 8192, 00:18:32.731 "large_pool_count": 1024, 00:18:32.731 "small_bufsize": 8192, 00:18:32.731 "large_bufsize": 135168, 00:18:32.731 "enable_numa": false 00:18:32.731 } 00:18:32.731 } 00:18:32.731 ] 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "subsystem": "sock", 00:18:32.731 "config": [ 00:18:32.731 { 00:18:32.731 "method": "sock_set_default_impl", 00:18:32.731 "params": { 00:18:32.731 "impl_name": "posix" 00:18:32.731 } 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "method": "sock_impl_set_options", 00:18:32.731 "params": { 00:18:32.731 "impl_name": "ssl", 00:18:32.731 "recv_buf_size": 4096, 00:18:32.731 "send_buf_size": 4096, 00:18:32.731 "enable_recv_pipe": true, 00:18:32.731 "enable_quickack": false, 00:18:32.731 "enable_placement_id": 0, 00:18:32.731 "enable_zerocopy_send_server": true, 00:18:32.731 "enable_zerocopy_send_client": false, 00:18:32.731 "zerocopy_threshold": 0, 00:18:32.731 "tls_version": 0, 00:18:32.731 "enable_ktls": false 00:18:32.731 } 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "method": "sock_impl_set_options", 00:18:32.731 "params": { 00:18:32.731 "impl_name": "posix", 00:18:32.731 "recv_buf_size": 2097152, 00:18:32.731 "send_buf_size": 2097152, 00:18:32.731 "enable_recv_pipe": true, 00:18:32.731 "enable_quickack": false, 00:18:32.731 "enable_placement_id": 0, 
00:18:32.731 "enable_zerocopy_send_server": true, 00:18:32.731 "enable_zerocopy_send_client": false, 00:18:32.731 "zerocopy_threshold": 0, 00:18:32.731 "tls_version": 0, 00:18:32.731 "enable_ktls": false 00:18:32.731 } 00:18:32.731 } 00:18:32.731 ] 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "subsystem": "vmd", 00:18:32.731 "config": [] 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "subsystem": "accel", 00:18:32.731 "config": [ 00:18:32.731 { 00:18:32.731 "method": "accel_set_options", 00:18:32.731 "params": { 00:18:32.731 "small_cache_size": 128, 00:18:32.731 "large_cache_size": 16, 00:18:32.731 "task_count": 2048, 00:18:32.731 "sequence_count": 2048, 00:18:32.731 "buf_count": 2048 00:18:32.731 } 00:18:32.731 } 00:18:32.731 ] 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "subsystem": "bdev", 00:18:32.731 "config": [ 00:18:32.731 { 00:18:32.731 "method": "bdev_set_options", 00:18:32.731 "params": { 00:18:32.731 "bdev_io_pool_size": 65535, 00:18:32.731 "bdev_io_cache_size": 256, 00:18:32.731 "bdev_auto_examine": true, 00:18:32.731 "iobuf_small_cache_size": 128, 00:18:32.731 "iobuf_large_cache_size": 16 00:18:32.731 } 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "method": "bdev_raid_set_options", 00:18:32.731 "params": { 00:18:32.731 "process_window_size_kb": 1024, 00:18:32.731 "process_max_bandwidth_mb_sec": 0 00:18:32.731 } 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "method": "bdev_iscsi_set_options", 00:18:32.731 "params": { 00:18:32.731 "timeout_sec": 30 00:18:32.731 } 00:18:32.731 }, 00:18:32.731 { 00:18:32.731 "method": "bdev_nvme_set_options", 00:18:32.731 "params": { 00:18:32.731 "action_on_timeout": "none", 00:18:32.731 "timeout_us": 0, 00:18:32.731 "timeout_admin_us": 0, 00:18:32.731 "keep_alive_timeout_ms": 10000, 00:18:32.731 "arbitration_burst": 0, 00:18:32.731 "low_priority_weight": 0, 00:18:32.731 "medium_priority_weight": 0, 00:18:32.731 "high_priority_weight": 0, 00:18:32.731 "nvme_adminq_poll_period_us": 10000, 00:18:32.731 "nvme_ioq_poll_period_us": 0, 
00:18:32.731 "io_queue_requests": 0, 00:18:32.731 "delay_cmd_submit": true, 00:18:32.731 "transport_retry_count": 4, 00:18:32.731 "bdev_retry_count": 3, 00:18:32.731 "transport_ack_timeout": 0, 00:18:32.731 "ctrlr_loss_timeout_sec": 0, 00:18:32.731 "reconnect_delay_sec": 0, 00:18:32.731 "fast_io_fail_timeout_sec": 0, 00:18:32.731 "disable_auto_failback": false, 00:18:32.731 "generate_uuids": false, 00:18:32.731 "transport_tos": 0, 00:18:32.731 "nvme_error_stat": false, 00:18:32.731 "rdma_srq_size": 0, 00:18:32.732 "io_path_stat": false, 00:18:32.732 "allow_accel_sequence": false, 00:18:32.732 "rdma_max_cq_size": 0, 00:18:32.732 "rdma_cm_event_timeout_ms": 0, 00:18:32.732 "dhchap_digests": [ 00:18:32.732 "sha256", 00:18:32.732 "sha384", 00:18:32.732 "sha512" 00:18:32.732 ], 00:18:32.732 "dhchap_dhgroups": [ 00:18:32.732 "null", 00:18:32.732 "ffdhe2048", 00:18:32.732 "ffdhe3072", 00:18:32.732 "ffdhe4096", 00:18:32.732 "ffdhe6144", 00:18:32.732 "ffdhe8192" 00:18:32.732 ] 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "bdev_nvme_set_hotplug", 00:18:32.732 "params": { 00:18:32.732 "period_us": 100000, 00:18:32.732 "enable": false 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "bdev_malloc_create", 00:18:32.732 "params": { 00:18:32.732 "name": "malloc0", 00:18:32.732 "num_blocks": 8192, 00:18:32.732 "block_size": 4096, 00:18:32.732 "physical_block_size": 4096, 00:18:32.732 "uuid": "b003405e-fc5d-417d-9511-295d4f9ef5d6", 00:18:32.732 "optimal_io_boundary": 0, 00:18:32.732 "md_size": 0, 00:18:32.732 "dif_type": 0, 00:18:32.732 "dif_is_head_of_md": false, 00:18:32.732 "dif_pi_format": 0 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "bdev_wait_for_examine" 00:18:32.732 } 00:18:32.732 ] 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "subsystem": "nbd", 00:18:32.732 "config": [] 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "subsystem": "scheduler", 00:18:32.732 "config": [ 00:18:32.732 { 00:18:32.732 "method": 
"framework_set_scheduler", 00:18:32.732 "params": { 00:18:32.732 "name": "static" 00:18:32.732 } 00:18:32.732 } 00:18:32.732 ] 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "subsystem": "nvmf", 00:18:32.732 "config": [ 00:18:32.732 { 00:18:32.732 "method": "nvmf_set_config", 00:18:32.732 "params": { 00:18:32.732 "discovery_filter": "match_any", 00:18:32.732 "admin_cmd_passthru": { 00:18:32.732 "identify_ctrlr": false 00:18:32.732 }, 00:18:32.732 "dhchap_digests": [ 00:18:32.732 "sha256", 00:18:32.732 "sha384", 00:18:32.732 "sha512" 00:18:32.732 ], 00:18:32.732 "dhchap_dhgroups": [ 00:18:32.732 "null", 00:18:32.732 "ffdhe2048", 00:18:32.732 "ffdhe3072", 00:18:32.732 "ffdhe4096", 00:18:32.732 "ffdhe6144", 00:18:32.732 "ffdhe8192" 00:18:32.732 ] 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_set_max_subsystems", 00:18:32.732 "params": { 00:18:32.732 "max_subsystems": 1024 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_set_crdt", 00:18:32.732 "params": { 00:18:32.732 "crdt1": 0, 00:18:32.732 "crdt2": 0, 00:18:32.732 "crdt3": 0 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_create_transport", 00:18:32.732 "params": { 00:18:32.732 "trtype": "TCP", 00:18:32.732 "max_queue_depth": 128, 00:18:32.732 "max_io_qpairs_per_ctrlr": 127, 00:18:32.732 "in_capsule_data_size": 4096, 00:18:32.732 "max_io_size": 131072, 00:18:32.732 "io_unit_size": 131072, 00:18:32.732 "max_aq_depth": 128, 00:18:32.732 "num_shared_buffers": 511, 00:18:32.732 "buf_cache_size": 4294967295, 00:18:32.732 "dif_insert_or_strip": false, 00:18:32.732 "zcopy": false, 00:18:32.732 "c2h_success": false, 00:18:32.732 "sock_priority": 0, 00:18:32.732 "abort_timeout_sec": 1, 00:18:32.732 "ack_timeout": 0, 00:18:32.732 "data_wr_pool_size": 0 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_create_subsystem", 00:18:32.732 "params": { 00:18:32.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.732 
"allow_any_host": false, 00:18:32.732 "serial_number": "SPDK00000000000001", 00:18:32.732 "model_number": "SPDK bdev Controller", 00:18:32.732 "max_namespaces": 10, 00:18:32.732 "min_cntlid": 1, 00:18:32.732 "max_cntlid": 65519, 00:18:32.732 "ana_reporting": false 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_subsystem_add_host", 00:18:32.732 "params": { 00:18:32.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.732 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.732 "psk": "key0" 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_subsystem_add_ns", 00:18:32.732 "params": { 00:18:32.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.732 "namespace": { 00:18:32.732 "nsid": 1, 00:18:32.732 "bdev_name": "malloc0", 00:18:32.732 "nguid": "B003405EFC5D417D9511295D4F9EF5D6", 00:18:32.732 "uuid": "b003405e-fc5d-417d-9511-295d4f9ef5d6", 00:18:32.732 "no_auto_visible": false 00:18:32.732 } 00:18:32.732 } 00:18:32.732 }, 00:18:32.732 { 00:18:32.732 "method": "nvmf_subsystem_add_listener", 00:18:32.732 "params": { 00:18:32.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.732 "listen_address": { 00:18:32.732 "trtype": "TCP", 00:18:32.732 "adrfam": "IPv4", 00:18:32.732 "traddr": "10.0.0.2", 00:18:32.732 "trsvcid": "4420" 00:18:32.732 }, 00:18:32.732 "secure_channel": true 00:18:32.732 } 00:18:32.732 } 00:18:32.732 ] 00:18:32.732 } 00:18:32.732 ] 00:18:32.732 }' 00:18:32.732 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:32.992 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:32.992 "subsystems": [ 00:18:32.992 { 00:18:32.992 "subsystem": "keyring", 00:18:32.992 "config": [ 00:18:32.992 { 00:18:32.992 "method": "keyring_file_add_key", 00:18:32.992 "params": { 00:18:32.992 "name": "key0", 00:18:32.992 "path": "/tmp/tmp.lsJe3HrWob" 00:18:32.992 } 
00:18:32.992 } 00:18:32.992 ] 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "subsystem": "iobuf", 00:18:32.992 "config": [ 00:18:32.992 { 00:18:32.992 "method": "iobuf_set_options", 00:18:32.992 "params": { 00:18:32.992 "small_pool_count": 8192, 00:18:32.992 "large_pool_count": 1024, 00:18:32.992 "small_bufsize": 8192, 00:18:32.992 "large_bufsize": 135168, 00:18:32.992 "enable_numa": false 00:18:32.992 } 00:18:32.992 } 00:18:32.992 ] 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "subsystem": "sock", 00:18:32.992 "config": [ 00:18:32.992 { 00:18:32.992 "method": "sock_set_default_impl", 00:18:32.992 "params": { 00:18:32.992 "impl_name": "posix" 00:18:32.992 } 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "method": "sock_impl_set_options", 00:18:32.992 "params": { 00:18:32.992 "impl_name": "ssl", 00:18:32.992 "recv_buf_size": 4096, 00:18:32.992 "send_buf_size": 4096, 00:18:32.992 "enable_recv_pipe": true, 00:18:32.992 "enable_quickack": false, 00:18:32.992 "enable_placement_id": 0, 00:18:32.992 "enable_zerocopy_send_server": true, 00:18:32.992 "enable_zerocopy_send_client": false, 00:18:32.992 "zerocopy_threshold": 0, 00:18:32.992 "tls_version": 0, 00:18:32.992 "enable_ktls": false 00:18:32.992 } 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "method": "sock_impl_set_options", 00:18:32.992 "params": { 00:18:32.992 "impl_name": "posix", 00:18:32.992 "recv_buf_size": 2097152, 00:18:32.992 "send_buf_size": 2097152, 00:18:32.992 "enable_recv_pipe": true, 00:18:32.992 "enable_quickack": false, 00:18:32.992 "enable_placement_id": 0, 00:18:32.992 "enable_zerocopy_send_server": true, 00:18:32.992 "enable_zerocopy_send_client": false, 00:18:32.992 "zerocopy_threshold": 0, 00:18:32.992 "tls_version": 0, 00:18:32.992 "enable_ktls": false 00:18:32.992 } 00:18:32.992 } 00:18:32.992 ] 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "subsystem": "vmd", 00:18:32.992 "config": [] 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "subsystem": "accel", 00:18:32.992 "config": [ 00:18:32.992 { 00:18:32.992 
"method": "accel_set_options", 00:18:32.992 "params": { 00:18:32.992 "small_cache_size": 128, 00:18:32.992 "large_cache_size": 16, 00:18:32.992 "task_count": 2048, 00:18:32.992 "sequence_count": 2048, 00:18:32.992 "buf_count": 2048 00:18:32.992 } 00:18:32.992 } 00:18:32.992 ] 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "subsystem": "bdev", 00:18:32.992 "config": [ 00:18:32.992 { 00:18:32.992 "method": "bdev_set_options", 00:18:32.992 "params": { 00:18:32.992 "bdev_io_pool_size": 65535, 00:18:32.992 "bdev_io_cache_size": 256, 00:18:32.992 "bdev_auto_examine": true, 00:18:32.992 "iobuf_small_cache_size": 128, 00:18:32.992 "iobuf_large_cache_size": 16 00:18:32.992 } 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "method": "bdev_raid_set_options", 00:18:32.992 "params": { 00:18:32.992 "process_window_size_kb": 1024, 00:18:32.992 "process_max_bandwidth_mb_sec": 0 00:18:32.992 } 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "method": "bdev_iscsi_set_options", 00:18:32.992 "params": { 00:18:32.992 "timeout_sec": 30 00:18:32.992 } 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "method": "bdev_nvme_set_options", 00:18:32.992 "params": { 00:18:32.992 "action_on_timeout": "none", 00:18:32.992 "timeout_us": 0, 00:18:32.992 "timeout_admin_us": 0, 00:18:32.992 "keep_alive_timeout_ms": 10000, 00:18:32.992 "arbitration_burst": 0, 00:18:32.992 "low_priority_weight": 0, 00:18:32.992 "medium_priority_weight": 0, 00:18:32.992 "high_priority_weight": 0, 00:18:32.992 "nvme_adminq_poll_period_us": 10000, 00:18:32.992 "nvme_ioq_poll_period_us": 0, 00:18:32.992 "io_queue_requests": 512, 00:18:32.992 "delay_cmd_submit": true, 00:18:32.992 "transport_retry_count": 4, 00:18:32.992 "bdev_retry_count": 3, 00:18:32.992 "transport_ack_timeout": 0, 00:18:32.992 "ctrlr_loss_timeout_sec": 0, 00:18:32.992 "reconnect_delay_sec": 0, 00:18:32.992 "fast_io_fail_timeout_sec": 0, 00:18:32.992 "disable_auto_failback": false, 00:18:32.992 "generate_uuids": false, 00:18:32.992 "transport_tos": 0, 00:18:32.992 
"nvme_error_stat": false, 00:18:32.992 "rdma_srq_size": 0, 00:18:32.992 "io_path_stat": false, 00:18:32.992 "allow_accel_sequence": false, 00:18:32.992 "rdma_max_cq_size": 0, 00:18:32.992 "rdma_cm_event_timeout_ms": 0, 00:18:32.993 "dhchap_digests": [ 00:18:32.993 "sha256", 00:18:32.993 "sha384", 00:18:32.993 "sha512" 00:18:32.993 ], 00:18:32.993 "dhchap_dhgroups": [ 00:18:32.993 "null", 00:18:32.993 "ffdhe2048", 00:18:32.993 "ffdhe3072", 00:18:32.993 "ffdhe4096", 00:18:32.993 "ffdhe6144", 00:18:32.993 "ffdhe8192" 00:18:32.993 ] 00:18:32.993 } 00:18:32.993 }, 00:18:32.993 { 00:18:32.993 "method": "bdev_nvme_attach_controller", 00:18:32.993 "params": { 00:18:32.993 "name": "TLSTEST", 00:18:32.993 "trtype": "TCP", 00:18:32.993 "adrfam": "IPv4", 00:18:32.993 "traddr": "10.0.0.2", 00:18:32.993 "trsvcid": "4420", 00:18:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.993 "prchk_reftag": false, 00:18:32.993 "prchk_guard": false, 00:18:32.993 "ctrlr_loss_timeout_sec": 0, 00:18:32.993 "reconnect_delay_sec": 0, 00:18:32.993 "fast_io_fail_timeout_sec": 0, 00:18:32.993 "psk": "key0", 00:18:32.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.993 "hdgst": false, 00:18:32.993 "ddgst": false, 00:18:32.993 "multipath": "multipath" 00:18:32.993 } 00:18:32.993 }, 00:18:32.993 { 00:18:32.993 "method": "bdev_nvme_set_hotplug", 00:18:32.993 "params": { 00:18:32.993 "period_us": 100000, 00:18:32.993 "enable": false 00:18:32.993 } 00:18:32.993 }, 00:18:32.993 { 00:18:32.993 "method": "bdev_wait_for_examine" 00:18:32.993 } 00:18:32.993 ] 00:18:32.993 }, 00:18:32.993 { 00:18:32.993 "subsystem": "nbd", 00:18:32.993 "config": [] 00:18:32.993 } 00:18:32.993 ] 00:18:32.993 }' 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3506510 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3506510 ']' 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3506510 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3506510 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3506510' 00:18:32.993 killing process with pid 3506510 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3506510 00:18:32.993 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.993 00:18:32.993 Latency(us) 00:18:32.993 [2024-11-20T09:35:33.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.993 [2024-11-20T09:35:33.724Z] =================================================================================================================== 00:18:32.993 [2024-11-20T09:35:33.724Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.993 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3506510 00:18:33.251 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3506095 00:18:33.251 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3506095 ']' 00:18:33.251 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3506095 00:18:33.251 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3506095 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3506095' 00:18:33.252 killing process with pid 3506095 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3506095 00:18:33.252 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3506095 00:18:33.511 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:33.511 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.511 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.511 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:33.511 "subsystems": [ 00:18:33.511 { 00:18:33.511 "subsystem": "keyring", 00:18:33.511 "config": [ 00:18:33.511 { 00:18:33.511 "method": "keyring_file_add_key", 00:18:33.511 "params": { 00:18:33.511 "name": "key0", 00:18:33.511 "path": "/tmp/tmp.lsJe3HrWob" 00:18:33.511 } 00:18:33.511 } 00:18:33.511 ] 00:18:33.511 }, 00:18:33.511 { 00:18:33.511 "subsystem": "iobuf", 00:18:33.511 "config": [ 00:18:33.511 { 00:18:33.511 "method": "iobuf_set_options", 00:18:33.511 "params": { 00:18:33.511 "small_pool_count": 8192, 00:18:33.511 "large_pool_count": 1024, 00:18:33.511 "small_bufsize": 8192, 00:18:33.511 "large_bufsize": 135168, 00:18:33.511 "enable_numa": false 00:18:33.511 } 00:18:33.511 } 00:18:33.511 ] 00:18:33.511 }, 
00:18:33.511 { 00:18:33.511 "subsystem": "sock", 00:18:33.511 "config": [ 00:18:33.511 { 00:18:33.511 "method": "sock_set_default_impl", 00:18:33.511 "params": { 00:18:33.511 "impl_name": "posix" 00:18:33.511 } 00:18:33.511 }, 00:18:33.511 { 00:18:33.511 "method": "sock_impl_set_options", 00:18:33.511 "params": { 00:18:33.511 "impl_name": "ssl", 00:18:33.512 "recv_buf_size": 4096, 00:18:33.512 "send_buf_size": 4096, 00:18:33.512 "enable_recv_pipe": true, 00:18:33.512 "enable_quickack": false, 00:18:33.512 "enable_placement_id": 0, 00:18:33.512 "enable_zerocopy_send_server": true, 00:18:33.512 "enable_zerocopy_send_client": false, 00:18:33.512 "zerocopy_threshold": 0, 00:18:33.512 "tls_version": 0, 00:18:33.512 "enable_ktls": false 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "sock_impl_set_options", 00:18:33.512 "params": { 00:18:33.512 "impl_name": "posix", 00:18:33.512 "recv_buf_size": 2097152, 00:18:33.512 "send_buf_size": 2097152, 00:18:33.512 "enable_recv_pipe": true, 00:18:33.512 "enable_quickack": false, 00:18:33.512 "enable_placement_id": 0, 00:18:33.512 "enable_zerocopy_send_server": true, 00:18:33.512 "enable_zerocopy_send_client": false, 00:18:33.512 "zerocopy_threshold": 0, 00:18:33.512 "tls_version": 0, 00:18:33.512 "enable_ktls": false 00:18:33.512 } 00:18:33.512 } 00:18:33.512 ] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "vmd", 00:18:33.512 "config": [] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "accel", 00:18:33.512 "config": [ 00:18:33.512 { 00:18:33.512 "method": "accel_set_options", 00:18:33.512 "params": { 00:18:33.512 "small_cache_size": 128, 00:18:33.512 "large_cache_size": 16, 00:18:33.512 "task_count": 2048, 00:18:33.512 "sequence_count": 2048, 00:18:33.512 "buf_count": 2048 00:18:33.512 } 00:18:33.512 } 00:18:33.512 ] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "bdev", 00:18:33.512 "config": [ 00:18:33.512 { 00:18:33.512 "method": "bdev_set_options", 00:18:33.512 "params": { 
00:18:33.512 "bdev_io_pool_size": 65535, 00:18:33.512 "bdev_io_cache_size": 256, 00:18:33.512 "bdev_auto_examine": true, 00:18:33.512 "iobuf_small_cache_size": 128, 00:18:33.512 "iobuf_large_cache_size": 16 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_raid_set_options", 00:18:33.512 "params": { 00:18:33.512 "process_window_size_kb": 1024, 00:18:33.512 "process_max_bandwidth_mb_sec": 0 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_iscsi_set_options", 00:18:33.512 "params": { 00:18:33.512 "timeout_sec": 30 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_nvme_set_options", 00:18:33.512 "params": { 00:18:33.512 "action_on_timeout": "none", 00:18:33.512 "timeout_us": 0, 00:18:33.512 "timeout_admin_us": 0, 00:18:33.512 "keep_alive_timeout_ms": 10000, 00:18:33.512 "arbitration_burst": 0, 00:18:33.512 "low_priority_weight": 0, 00:18:33.512 "medium_priority_weight": 0, 00:18:33.512 "high_priority_weight": 0, 00:18:33.512 "nvme_adminq_poll_period_us": 10000, 00:18:33.512 "nvme_ioq_poll_period_us": 0, 00:18:33.512 "io_queue_requests": 0, 00:18:33.512 "delay_cmd_submit": true, 00:18:33.512 "transport_retry_count": 4, 00:18:33.512 "bdev_retry_count": 3, 00:18:33.512 "transport_ack_timeout": 0, 00:18:33.512 "ctrlr_loss_timeout_sec": 0, 00:18:33.512 "reconnect_delay_sec": 0, 00:18:33.512 "fast_io_fail_timeout_sec": 0, 00:18:33.512 "disable_auto_failback": false, 00:18:33.512 "generate_uuids": false, 00:18:33.512 "transport_tos": 0, 00:18:33.512 "nvme_error_stat": false, 00:18:33.512 "rdma_srq_size": 0, 00:18:33.512 "io_path_stat": false, 00:18:33.512 "allow_accel_sequence": false, 00:18:33.512 "rdma_max_cq_size": 0, 00:18:33.512 "rdma_cm_event_timeout_ms": 0, 00:18:33.512 "dhchap_digests": [ 00:18:33.512 "sha256", 00:18:33.512 "sha384", 00:18:33.512 "sha512" 00:18:33.512 ], 00:18:33.512 "dhchap_dhgroups": [ 00:18:33.512 "null", 00:18:33.512 "ffdhe2048", 00:18:33.512 "ffdhe3072", 00:18:33.512 
"ffdhe4096", 00:18:33.512 "ffdhe6144", 00:18:33.512 "ffdhe8192" 00:18:33.512 ] 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_nvme_set_hotplug", 00:18:33.512 "params": { 00:18:33.512 "period_us": 100000, 00:18:33.512 "enable": false 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_malloc_create", 00:18:33.512 "params": { 00:18:33.512 "name": "malloc0", 00:18:33.512 "num_blocks": 8192, 00:18:33.512 "block_size": 4096, 00:18:33.512 "physical_block_size": 4096, 00:18:33.512 "uuid": "b003405e-fc5d-417d-9511-295d4f9ef5d6", 00:18:33.512 "optimal_io_boundary": 0, 00:18:33.512 "md_size": 0, 00:18:33.512 "dif_type": 0, 00:18:33.512 "dif_is_head_of_md": false, 00:18:33.512 "dif_pi_format": 0 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "bdev_wait_for_examine" 00:18:33.512 } 00:18:33.512 ] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "nbd", 00:18:33.512 "config": [] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "scheduler", 00:18:33.512 "config": [ 00:18:33.512 { 00:18:33.512 "method": "framework_set_scheduler", 00:18:33.512 "params": { 00:18:33.512 "name": "static" 00:18:33.512 } 00:18:33.512 } 00:18:33.512 ] 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "subsystem": "nvmf", 00:18:33.512 "config": [ 00:18:33.512 { 00:18:33.512 "method": "nvmf_set_config", 00:18:33.512 "params": { 00:18:33.512 "discovery_filter": "match_any", 00:18:33.512 "admin_cmd_passthru": { 00:18:33.512 "identify_ctrlr": false 00:18:33.512 }, 00:18:33.512 "dhchap_digests": [ 00:18:33.512 "sha256", 00:18:33.512 "sha384", 00:18:33.512 "sha512" 00:18:33.512 ], 00:18:33.512 "dhchap_dhgroups": [ 00:18:33.512 "null", 00:18:33.512 "ffdhe2048", 00:18:33.512 "ffdhe3072", 00:18:33.512 "ffdhe4096", 00:18:33.512 "ffdhe6144", 00:18:33.512 "ffdhe8192" 00:18:33.512 ] 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "nvmf_set_max_subsystems", 00:18:33.512 "params": { 00:18:33.512 "max_subsystems": 1024 
00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "nvmf_set_crdt", 00:18:33.512 "params": { 00:18:33.512 "crdt1": 0, 00:18:33.512 "crdt2": 0, 00:18:33.512 "crdt3": 0 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "nvmf_create_transport", 00:18:33.512 "params": { 00:18:33.512 "trtype": "TCP", 00:18:33.512 "max_queue_depth": 128, 00:18:33.512 "max_io_qpairs_per_ctrlr": 127, 00:18:33.512 "in_capsule_data_size": 4096, 00:18:33.512 "max_io_size": 131072, 00:18:33.512 "io_unit_size": 131072, 00:18:33.512 "max_aq_depth": 128, 00:18:33.512 "num_shared_buffers": 511, 00:18:33.512 "buf_cache_size": 4294967295, 00:18:33.512 "dif_insert_or_strip": false, 00:18:33.512 "zcopy": false, 00:18:33.512 "c2h_success": false, 00:18:33.512 "sock_priority": 0, 00:18:33.512 "abort_timeout_sec": 1, 00:18:33.512 "ack_timeout": 0, 00:18:33.512 "data_wr_pool_size": 0 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "nvmf_create_subsystem", 00:18:33.512 "params": { 00:18:33.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.512 "allow_any_host": false, 00:18:33.512 "serial_number": "SPDK00000000000001", 00:18:33.512 "model_number": "SPDK bdev Controller", 00:18:33.512 "max_namespaces": 10, 00:18:33.512 "min_cntlid": 1, 00:18:33.512 "max_cntlid": 65519, 00:18:33.512 "ana_reporting": false 00:18:33.512 } 00:18:33.512 }, 00:18:33.512 { 00:18:33.512 "method": "nvmf_subsystem_add_host", 00:18:33.512 "params": { 00:18:33.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.512 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.512 "psk": "key0" 00:18:33.512 } 00:18:33.512 }, 00:18:33.513 { 00:18:33.513 "method": "nvmf_subsystem_add_ns", 00:18:33.513 "params": { 00:18:33.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.513 "namespace": { 00:18:33.513 "nsid": 1, 00:18:33.513 "bdev_name": "malloc0", 00:18:33.513 "nguid": "B003405EFC5D417D9511295D4F9EF5D6", 00:18:33.513 "uuid": "b003405e-fc5d-417d-9511-295d4f9ef5d6", 00:18:33.513 "no_auto_visible": 
false 00:18:33.513 } 00:18:33.513 } 00:18:33.513 }, 00:18:33.513 { 00:18:33.513 "method": "nvmf_subsystem_add_listener", 00:18:33.513 "params": { 00:18:33.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.513 "listen_address": { 00:18:33.513 "trtype": "TCP", 00:18:33.513 "adrfam": "IPv4", 00:18:33.513 "traddr": "10.0.0.2", 00:18:33.513 "trsvcid": "4420" 00:18:33.513 }, 00:18:33.513 "secure_channel": true 00:18:33.513 } 00:18:33.513 } 00:18:33.513 ] 00:18:33.513 } 00:18:33.513 ] 00:18:33.513 }' 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3506763 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3506763 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3506763 ']' 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.513 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.513 [2024-11-20 10:35:34.072728] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:33.513 [2024-11-20 10:35:34.072776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.513 [2024-11-20 10:35:34.152480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.513 [2024-11-20 10:35:34.192922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.513 [2024-11-20 10:35:34.192962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.513 [2024-11-20 10:35:34.192969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.513 [2024-11-20 10:35:34.192975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.513 [2024-11-20 10:35:34.192980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.513 [2024-11-20 10:35:34.193555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.771 [2024-11-20 10:35:34.405870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.771 [2024-11-20 10:35:34.437896] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.771 [2024-11-20 10:35:34.438100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3506910 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3506910 /var/tmp/bdevperf.sock 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3506910 ']' 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.338 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:34.338 "subsystems": [ 00:18:34.338 { 00:18:34.338 "subsystem": "keyring", 00:18:34.338 "config": [ 00:18:34.338 { 00:18:34.338 "method": "keyring_file_add_key", 00:18:34.338 "params": { 00:18:34.338 "name": "key0", 00:18:34.338 "path": "/tmp/tmp.lsJe3HrWob" 00:18:34.338 } 00:18:34.338 } 00:18:34.338 ] 00:18:34.338 }, 00:18:34.338 { 00:18:34.338 "subsystem": "iobuf", 00:18:34.338 "config": [ 00:18:34.338 { 00:18:34.338 "method": "iobuf_set_options", 00:18:34.338 "params": { 00:18:34.338 "small_pool_count": 8192, 00:18:34.338 "large_pool_count": 1024, 00:18:34.338 "small_bufsize": 8192, 00:18:34.338 "large_bufsize": 135168, 00:18:34.338 "enable_numa": false 00:18:34.338 } 00:18:34.338 } 00:18:34.338 ] 00:18:34.338 }, 00:18:34.338 { 00:18:34.338 "subsystem": "sock", 00:18:34.338 "config": [ 00:18:34.338 { 00:18:34.338 "method": "sock_set_default_impl", 00:18:34.338 "params": { 00:18:34.338 "impl_name": "posix" 00:18:34.338 } 00:18:34.338 }, 00:18:34.338 { 00:18:34.338 "method": "sock_impl_set_options", 00:18:34.338 "params": { 00:18:34.338 "impl_name": "ssl", 00:18:34.338 "recv_buf_size": 4096, 00:18:34.339 "send_buf_size": 4096, 00:18:34.339 "enable_recv_pipe": true, 00:18:34.339 "enable_quickack": false, 00:18:34.339 "enable_placement_id": 0, 00:18:34.339 "enable_zerocopy_send_server": true, 00:18:34.339 "enable_zerocopy_send_client": false, 00:18:34.339 "zerocopy_threshold": 0, 00:18:34.339 "tls_version": 0, 00:18:34.339 "enable_ktls": false 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "sock_impl_set_options", 00:18:34.339 "params": { 
00:18:34.339 "impl_name": "posix", 00:18:34.339 "recv_buf_size": 2097152, 00:18:34.339 "send_buf_size": 2097152, 00:18:34.339 "enable_recv_pipe": true, 00:18:34.339 "enable_quickack": false, 00:18:34.339 "enable_placement_id": 0, 00:18:34.339 "enable_zerocopy_send_server": true, 00:18:34.339 "enable_zerocopy_send_client": false, 00:18:34.339 "zerocopy_threshold": 0, 00:18:34.339 "tls_version": 0, 00:18:34.339 "enable_ktls": false 00:18:34.339 } 00:18:34.339 } 00:18:34.339 ] 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "subsystem": "vmd", 00:18:34.339 "config": [] 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "subsystem": "accel", 00:18:34.339 "config": [ 00:18:34.339 { 00:18:34.339 "method": "accel_set_options", 00:18:34.339 "params": { 00:18:34.339 "small_cache_size": 128, 00:18:34.339 "large_cache_size": 16, 00:18:34.339 "task_count": 2048, 00:18:34.339 "sequence_count": 2048, 00:18:34.339 "buf_count": 2048 00:18:34.339 } 00:18:34.339 } 00:18:34.339 ] 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "subsystem": "bdev", 00:18:34.339 "config": [ 00:18:34.339 { 00:18:34.339 "method": "bdev_set_options", 00:18:34.339 "params": { 00:18:34.339 "bdev_io_pool_size": 65535, 00:18:34.339 "bdev_io_cache_size": 256, 00:18:34.339 "bdev_auto_examine": true, 00:18:34.339 "iobuf_small_cache_size": 128, 00:18:34.339 "iobuf_large_cache_size": 16 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "bdev_raid_set_options", 00:18:34.339 "params": { 00:18:34.339 "process_window_size_kb": 1024, 00:18:34.339 "process_max_bandwidth_mb_sec": 0 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "bdev_iscsi_set_options", 00:18:34.339 "params": { 00:18:34.339 "timeout_sec": 30 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "bdev_nvme_set_options", 00:18:34.339 "params": { 00:18:34.339 "action_on_timeout": "none", 00:18:34.339 "timeout_us": 0, 00:18:34.339 "timeout_admin_us": 0, 00:18:34.339 "keep_alive_timeout_ms": 10000, 00:18:34.339 
"arbitration_burst": 0, 00:18:34.339 "low_priority_weight": 0, 00:18:34.339 "medium_priority_weight": 0, 00:18:34.339 "high_priority_weight": 0, 00:18:34.339 "nvme_adminq_poll_period_us": 10000, 00:18:34.339 "nvme_ioq_poll_period_us": 0, 00:18:34.339 "io_queue_requests": 512, 00:18:34.339 "delay_cmd_submit": true, 00:18:34.339 "transport_retry_count": 4, 00:18:34.339 "bdev_retry_count": 3, 00:18:34.339 "transport_ack_timeout": 0, 00:18:34.339 "ctrlr_loss_timeout_sec": 0, 00:18:34.339 "reconnect_delay_sec": 0, 00:18:34.339 "fast_io_fail_timeout_sec": 0, 00:18:34.339 "disable_auto_failback": false, 00:18:34.339 "generate_uuids": false, 00:18:34.339 "transport_tos": 0, 00:18:34.339 "nvme_error_stat": false, 00:18:34.339 "rdma_srq_size": 0, 00:18:34.339 "io_path_stat": false, 00:18:34.339 "allow_accel_sequence": false, 00:18:34.339 "rdma_max_cq_size": 0, 00:18:34.339 "rdma_cm_event_timeout_ms": 0, 00:18:34.339 "dhchap_digests": [ 00:18:34.339 "sha256", 00:18:34.339 "sha384", 00:18:34.339 "sha512" 00:18:34.339 ], 00:18:34.339 "dhchap_dhgroups": [ 00:18:34.339 "null", 00:18:34.339 "ffdhe2048", 00:18:34.339 "ffdhe3072", 00:18:34.339 "ffdhe4096", 00:18:34.339 "ffdhe6144", 00:18:34.339 "ffdhe8192" 00:18:34.339 ] 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "bdev_nvme_attach_controller", 00:18:34.339 "params": { 00:18:34.339 "name": "TLSTEST", 00:18:34.339 "trtype": "TCP", 00:18:34.339 "adrfam": "IPv4", 00:18:34.339 "traddr": "10.0.0.2", 00:18:34.339 "trsvcid": "4420", 00:18:34.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.339 "prchk_reftag": false, 00:18:34.339 "prchk_guard": false, 00:18:34.339 "ctrlr_loss_timeout_sec": 0, 00:18:34.339 "reconnect_delay_sec": 0, 00:18:34.339 "fast_io_fail_timeout_sec": 0, 00:18:34.339 "psk": "key0", 00:18:34.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.339 "hdgst": false, 00:18:34.339 "ddgst": false, 00:18:34.339 "multipath": "multipath" 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 
"method": "bdev_nvme_set_hotplug", 00:18:34.339 "params": { 00:18:34.339 "period_us": 100000, 00:18:34.339 "enable": false 00:18:34.339 } 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "method": "bdev_wait_for_examine" 00:18:34.339 } 00:18:34.339 ] 00:18:34.339 }, 00:18:34.339 { 00:18:34.339 "subsystem": "nbd", 00:18:34.339 "config": [] 00:18:34.339 } 00:18:34.339 ] 00:18:34.339 }' 00:18:34.339 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.339 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.339 [2024-11-20 10:35:34.981879] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:34.339 [2024-11-20 10:35:34.981927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506910 ] 00:18:34.339 [2024-11-20 10:35:35.055418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.598 [2024-11-20 10:35:35.097810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.598 [2024-11-20 10:35:35.250739] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.166 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.166 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.166 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.424 Running I/O for 10 seconds... 
00:18:37.296 5419.00 IOPS, 21.17 MiB/s [2024-11-20T09:35:39.023Z] 5405.00 IOPS, 21.11 MiB/s [2024-11-20T09:35:39.959Z] 5440.00 IOPS, 21.25 MiB/s [2024-11-20T09:35:41.335Z] 5448.75 IOPS, 21.28 MiB/s [2024-11-20T09:35:42.272Z] 5444.80 IOPS, 21.27 MiB/s [2024-11-20T09:35:43.208Z] 5457.17 IOPS, 21.32 MiB/s [2024-11-20T09:35:44.145Z] 5459.71 IOPS, 21.33 MiB/s [2024-11-20T09:35:45.081Z] 5462.00 IOPS, 21.34 MiB/s [2024-11-20T09:35:46.017Z] 5456.56 IOPS, 21.31 MiB/s [2024-11-20T09:35:46.017Z] 5460.40 IOPS, 21.33 MiB/s 00:18:45.286 Latency(us) 00:18:45.286 [2024-11-20T09:35:46.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.286 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.286 Verification LBA range: start 0x0 length 0x2000 00:18:45.286 TLSTESTn1 : 10.02 5463.70 21.34 0.00 0.00 23390.27 5755.77 25530.55 00:18:45.286 [2024-11-20T09:35:46.017Z] =================================================================================================================== 00:18:45.286 [2024-11-20T09:35:46.017Z] Total : 5463.70 21.34 0.00 0.00 23390.27 5755.77 25530.55 00:18:45.286 { 00:18:45.286 "results": [ 00:18:45.286 { 00:18:45.286 "job": "TLSTESTn1", 00:18:45.286 "core_mask": "0x4", 00:18:45.286 "workload": "verify", 00:18:45.286 "status": "finished", 00:18:45.286 "verify_range": { 00:18:45.286 "start": 0, 00:18:45.286 "length": 8192 00:18:45.286 }, 00:18:45.286 "queue_depth": 128, 00:18:45.286 "io_size": 4096, 00:18:45.287 "runtime": 10.017021, 00:18:45.287 "iops": 5463.700235828596, 00:18:45.287 "mibps": 21.342579046205454, 00:18:45.287 "io_failed": 0, 00:18:45.287 "io_timeout": 0, 00:18:45.287 "avg_latency_us": 23390.265350312602, 00:18:45.287 "min_latency_us": 5755.770434782608, 00:18:45.287 "max_latency_us": 25530.54608695652 00:18:45.287 } 00:18:45.287 ], 00:18:45.287 "core_count": 1 00:18:45.287 } 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3506910 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3506910 ']' 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3506910 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.287 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3506910 00:18:45.545 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.545 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.545 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3506910' 00:18:45.545 killing process with pid 3506910 00:18:45.545 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3506910 00:18:45.545 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.545 00:18:45.545 Latency(us) 00:18:45.545 [2024-11-20T09:35:46.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.545 [2024-11-20T09:35:46.276Z] =================================================================================================================== 00:18:45.545 [2024-11-20T09:35:46.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3506910 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3506763 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3506763 ']' 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3506763 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3506763 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3506763' 00:18:45.546 killing process with pid 3506763 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3506763 00:18:45.546 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3506763 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3508829 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3508829 00:18:45.804 
10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3508829 ']' 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.804 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.804 [2024-11-20 10:35:46.466271] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:45.804 [2024-11-20 10:35:46.466318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.064 [2024-11-20 10:35:46.542823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.064 [2024-11-20 10:35:46.583894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.064 [2024-11-20 10:35:46.583929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.064 [2024-11-20 10:35:46.583937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.064 [2024-11-20 10:35:46.583943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:46.064 [2024-11-20 10:35:46.583954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.064 [2024-11-20 10:35:46.584529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lsJe3HrWob 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lsJe3HrWob 00:18:46.064 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.322 [2024-11-20 10:35:46.884791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.322 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.580 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:46.580 [2024-11-20 10:35:47.289841] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:46.580 [2024-11-20 10:35:47.290077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.838 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:46.838 malloc0 00:18:46.838 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.097 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:47.356 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3509103 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3509103 /var/tmp/bdevperf.sock 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3509103 ']' 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.615 
10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.615 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.615 [2024-11-20 10:35:48.157732] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:47.615 [2024-11-20 10:35:48.157780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509103 ] 00:18:47.615 [2024-11-20 10:35:48.232358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.615 [2024-11-20 10:35:48.273396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.875 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.875 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.875 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:47.875 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:48.133 [2024-11-20 10:35:48.733376] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:48.133 nvme0n1 00:18:48.133 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.390 Running I/O for 1 seconds... 00:18:49.327 5553.00 IOPS, 21.69 MiB/s 00:18:49.327 Latency(us) 00:18:49.327 [2024-11-20T09:35:50.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.327 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.327 Verification LBA range: start 0x0 length 0x2000 00:18:49.327 nvme0n1 : 1.02 5549.28 21.68 0.00 0.00 22845.90 4986.43 23137.06 00:18:49.327 [2024-11-20T09:35:50.058Z] =================================================================================================================== 00:18:49.327 [2024-11-20T09:35:50.058Z] Total : 5549.28 21.68 0.00 0.00 22845.90 4986.43 23137.06 00:18:49.327 { 00:18:49.327 "results": [ 00:18:49.327 { 00:18:49.327 "job": "nvme0n1", 00:18:49.328 "core_mask": "0x2", 00:18:49.328 "workload": "verify", 00:18:49.328 "status": "finished", 00:18:49.328 "verify_range": { 00:18:49.328 "start": 0, 00:18:49.328 "length": 8192 00:18:49.328 }, 00:18:49.328 "queue_depth": 128, 00:18:49.328 "io_size": 4096, 00:18:49.328 "runtime": 1.023917, 00:18:49.328 "iops": 5549.277919987655, 00:18:49.328 "mibps": 21.676866874951777, 00:18:49.328 "io_failed": 0, 00:18:49.328 "io_timeout": 0, 00:18:49.328 "avg_latency_us": 22845.902455657073, 00:18:49.328 "min_latency_us": 4986.434782608696, 00:18:49.328 "max_latency_us": 23137.057391304348 00:18:49.328 } 00:18:49.328 ], 00:18:49.328 "core_count": 1 00:18:49.328 } 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3509103 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3509103 ']' 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3509103 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.328 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509103 00:18:49.328 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.328 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.328 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509103' 00:18:49.328 killing process with pid 3509103 00:18:49.328 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3509103 00:18:49.328 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.328 00:18:49.328 Latency(us) 00:18:49.328 [2024-11-20T09:35:50.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.328 [2024-11-20T09:35:50.059Z] =================================================================================================================== 00:18:49.328 [2024-11-20T09:35:50.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.328 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3509103 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3508829 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3508829 ']' 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3508829 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3508829 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3508829' 00:18:49.587 killing process with pid 3508829 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3508829 00:18:49.587 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3508829 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3509425 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3509425 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3509425 ']' 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.847 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.847 [2024-11-20 10:35:50.465025] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:49.847 [2024-11-20 10:35:50.465074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.847 [2024-11-20 10:35:50.545087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.106 [2024-11-20 10:35:50.586742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.106 [2024-11-20 10:35:50.586778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.106 [2024-11-20 10:35:50.586785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.106 [2024-11-20 10:35:50.586791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.106 [2024-11-20 10:35:50.586797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.106 [2024-11-20 10:35:50.587396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.106 [2024-11-20 10:35:50.724428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.106 malloc0 00:18:50.106 [2024-11-20 10:35:50.752755] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.106 [2024-11-20 10:35:50.752971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3509596 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3509596 /var/tmp/bdevperf.sock 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3509596 ']' 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.106 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.106 [2024-11-20 10:35:50.826095] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:50.106 [2024-11-20 10:35:50.826137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509596 ] 00:18:50.365 [2024-11-20 10:35:50.899517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.365 [2024-11-20 10:35:50.940409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.365 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.365 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.365 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lsJe3HrWob 00:18:50.623 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.882 [2024-11-20 10:35:51.407901] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.882 nvme0n1 00:18:50.882 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.882 Running I/O for 1 seconds... 
00:18:52.259 5386.00 IOPS, 21.04 MiB/s 00:18:52.259 Latency(us) 00:18:52.259 [2024-11-20T09:35:52.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.259 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:52.259 Verification LBA range: start 0x0 length 0x2000 00:18:52.259 nvme0n1 : 1.01 5440.68 21.25 0.00 0.00 23361.23 5014.93 21085.50 00:18:52.259 [2024-11-20T09:35:52.990Z] =================================================================================================================== 00:18:52.259 [2024-11-20T09:35:52.990Z] Total : 5440.68 21.25 0.00 0.00 23361.23 5014.93 21085.50 00:18:52.259 { 00:18:52.259 "results": [ 00:18:52.259 { 00:18:52.259 "job": "nvme0n1", 00:18:52.259 "core_mask": "0x2", 00:18:52.259 "workload": "verify", 00:18:52.259 "status": "finished", 00:18:52.259 "verify_range": { 00:18:52.259 "start": 0, 00:18:52.259 "length": 8192 00:18:52.259 }, 00:18:52.259 "queue_depth": 128, 00:18:52.259 "io_size": 4096, 00:18:52.259 "runtime": 1.013476, 00:18:52.259 "iops": 5440.681377753395, 00:18:52.259 "mibps": 21.2526616318492, 00:18:52.259 "io_failed": 0, 00:18:52.259 "io_timeout": 0, 00:18:52.259 "avg_latency_us": 23361.228627209788, 00:18:52.259 "min_latency_us": 5014.928695652174, 00:18:52.259 "max_latency_us": 21085.49565217391 00:18:52.259 } 00:18:52.259 ], 00:18:52.259 "core_count": 1 00:18:52.259 } 00:18:52.259 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:52.259 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.259 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.259 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.259 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:52.259 "subsystems": [ 00:18:52.259 { 00:18:52.259 "subsystem": 
"keyring", 00:18:52.259 "config": [ 00:18:52.259 { 00:18:52.259 "method": "keyring_file_add_key", 00:18:52.259 "params": { 00:18:52.259 "name": "key0", 00:18:52.259 "path": "/tmp/tmp.lsJe3HrWob" 00:18:52.259 } 00:18:52.259 } 00:18:52.259 ] 00:18:52.259 }, 00:18:52.259 { 00:18:52.259 "subsystem": "iobuf", 00:18:52.259 "config": [ 00:18:52.259 { 00:18:52.259 "method": "iobuf_set_options", 00:18:52.259 "params": { 00:18:52.259 "small_pool_count": 8192, 00:18:52.259 "large_pool_count": 1024, 00:18:52.259 "small_bufsize": 8192, 00:18:52.259 "large_bufsize": 135168, 00:18:52.259 "enable_numa": false 00:18:52.259 } 00:18:52.259 } 00:18:52.259 ] 00:18:52.259 }, 00:18:52.259 { 00:18:52.259 "subsystem": "sock", 00:18:52.259 "config": [ 00:18:52.259 { 00:18:52.260 "method": "sock_set_default_impl", 00:18:52.260 "params": { 00:18:52.260 "impl_name": "posix" 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "sock_impl_set_options", 00:18:52.260 "params": { 00:18:52.260 "impl_name": "ssl", 00:18:52.260 "recv_buf_size": 4096, 00:18:52.260 "send_buf_size": 4096, 00:18:52.260 "enable_recv_pipe": true, 00:18:52.260 "enable_quickack": false, 00:18:52.260 "enable_placement_id": 0, 00:18:52.260 "enable_zerocopy_send_server": true, 00:18:52.260 "enable_zerocopy_send_client": false, 00:18:52.260 "zerocopy_threshold": 0, 00:18:52.260 "tls_version": 0, 00:18:52.260 "enable_ktls": false 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "sock_impl_set_options", 00:18:52.260 "params": { 00:18:52.260 "impl_name": "posix", 00:18:52.260 "recv_buf_size": 2097152, 00:18:52.260 "send_buf_size": 2097152, 00:18:52.260 "enable_recv_pipe": true, 00:18:52.260 "enable_quickack": false, 00:18:52.260 "enable_placement_id": 0, 00:18:52.260 "enable_zerocopy_send_server": true, 00:18:52.260 "enable_zerocopy_send_client": false, 00:18:52.260 "zerocopy_threshold": 0, 00:18:52.260 "tls_version": 0, 00:18:52.260 "enable_ktls": false 00:18:52.260 } 00:18:52.260 } 00:18:52.260 
] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "vmd", 00:18:52.260 "config": [] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "accel", 00:18:52.260 "config": [ 00:18:52.260 { 00:18:52.260 "method": "accel_set_options", 00:18:52.260 "params": { 00:18:52.260 "small_cache_size": 128, 00:18:52.260 "large_cache_size": 16, 00:18:52.260 "task_count": 2048, 00:18:52.260 "sequence_count": 2048, 00:18:52.260 "buf_count": 2048 00:18:52.260 } 00:18:52.260 } 00:18:52.260 ] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "bdev", 00:18:52.260 "config": [ 00:18:52.260 { 00:18:52.260 "method": "bdev_set_options", 00:18:52.260 "params": { 00:18:52.260 "bdev_io_pool_size": 65535, 00:18:52.260 "bdev_io_cache_size": 256, 00:18:52.260 "bdev_auto_examine": true, 00:18:52.260 "iobuf_small_cache_size": 128, 00:18:52.260 "iobuf_large_cache_size": 16 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_raid_set_options", 00:18:52.260 "params": { 00:18:52.260 "process_window_size_kb": 1024, 00:18:52.260 "process_max_bandwidth_mb_sec": 0 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_iscsi_set_options", 00:18:52.260 "params": { 00:18:52.260 "timeout_sec": 30 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_nvme_set_options", 00:18:52.260 "params": { 00:18:52.260 "action_on_timeout": "none", 00:18:52.260 "timeout_us": 0, 00:18:52.260 "timeout_admin_us": 0, 00:18:52.260 "keep_alive_timeout_ms": 10000, 00:18:52.260 "arbitration_burst": 0, 00:18:52.260 "low_priority_weight": 0, 00:18:52.260 "medium_priority_weight": 0, 00:18:52.260 "high_priority_weight": 0, 00:18:52.260 "nvme_adminq_poll_period_us": 10000, 00:18:52.260 "nvme_ioq_poll_period_us": 0, 00:18:52.260 "io_queue_requests": 0, 00:18:52.260 "delay_cmd_submit": true, 00:18:52.260 "transport_retry_count": 4, 00:18:52.260 "bdev_retry_count": 3, 00:18:52.260 "transport_ack_timeout": 0, 00:18:52.260 "ctrlr_loss_timeout_sec": 0, 
00:18:52.260 "reconnect_delay_sec": 0, 00:18:52.260 "fast_io_fail_timeout_sec": 0, 00:18:52.260 "disable_auto_failback": false, 00:18:52.260 "generate_uuids": false, 00:18:52.260 "transport_tos": 0, 00:18:52.260 "nvme_error_stat": false, 00:18:52.260 "rdma_srq_size": 0, 00:18:52.260 "io_path_stat": false, 00:18:52.260 "allow_accel_sequence": false, 00:18:52.260 "rdma_max_cq_size": 0, 00:18:52.260 "rdma_cm_event_timeout_ms": 0, 00:18:52.260 "dhchap_digests": [ 00:18:52.260 "sha256", 00:18:52.260 "sha384", 00:18:52.260 "sha512" 00:18:52.260 ], 00:18:52.260 "dhchap_dhgroups": [ 00:18:52.260 "null", 00:18:52.260 "ffdhe2048", 00:18:52.260 "ffdhe3072", 00:18:52.260 "ffdhe4096", 00:18:52.260 "ffdhe6144", 00:18:52.260 "ffdhe8192" 00:18:52.260 ] 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_nvme_set_hotplug", 00:18:52.260 "params": { 00:18:52.260 "period_us": 100000, 00:18:52.260 "enable": false 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_malloc_create", 00:18:52.260 "params": { 00:18:52.260 "name": "malloc0", 00:18:52.260 "num_blocks": 8192, 00:18:52.260 "block_size": 4096, 00:18:52.260 "physical_block_size": 4096, 00:18:52.260 "uuid": "f83ea412-cd8d-4037-ad99-c31e04c43739", 00:18:52.260 "optimal_io_boundary": 0, 00:18:52.260 "md_size": 0, 00:18:52.260 "dif_type": 0, 00:18:52.260 "dif_is_head_of_md": false, 00:18:52.260 "dif_pi_format": 0 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "bdev_wait_for_examine" 00:18:52.260 } 00:18:52.260 ] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "nbd", 00:18:52.260 "config": [] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "scheduler", 00:18:52.260 "config": [ 00:18:52.260 { 00:18:52.260 "method": "framework_set_scheduler", 00:18:52.260 "params": { 00:18:52.260 "name": "static" 00:18:52.260 } 00:18:52.260 } 00:18:52.260 ] 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "subsystem": "nvmf", 00:18:52.260 "config": [ 00:18:52.260 { 
00:18:52.260 "method": "nvmf_set_config", 00:18:52.260 "params": { 00:18:52.260 "discovery_filter": "match_any", 00:18:52.260 "admin_cmd_passthru": { 00:18:52.260 "identify_ctrlr": false 00:18:52.260 }, 00:18:52.260 "dhchap_digests": [ 00:18:52.260 "sha256", 00:18:52.260 "sha384", 00:18:52.260 "sha512" 00:18:52.260 ], 00:18:52.260 "dhchap_dhgroups": [ 00:18:52.260 "null", 00:18:52.260 "ffdhe2048", 00:18:52.260 "ffdhe3072", 00:18:52.260 "ffdhe4096", 00:18:52.260 "ffdhe6144", 00:18:52.260 "ffdhe8192" 00:18:52.260 ] 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "nvmf_set_max_subsystems", 00:18:52.260 "params": { 00:18:52.260 "max_subsystems": 1024 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "nvmf_set_crdt", 00:18:52.260 "params": { 00:18:52.260 "crdt1": 0, 00:18:52.260 "crdt2": 0, 00:18:52.260 "crdt3": 0 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "nvmf_create_transport", 00:18:52.260 "params": { 00:18:52.260 "trtype": "TCP", 00:18:52.260 "max_queue_depth": 128, 00:18:52.260 "max_io_qpairs_per_ctrlr": 127, 00:18:52.260 "in_capsule_data_size": 4096, 00:18:52.260 "max_io_size": 131072, 00:18:52.260 "io_unit_size": 131072, 00:18:52.260 "max_aq_depth": 128, 00:18:52.260 "num_shared_buffers": 511, 00:18:52.260 "buf_cache_size": 4294967295, 00:18:52.260 "dif_insert_or_strip": false, 00:18:52.260 "zcopy": false, 00:18:52.260 "c2h_success": false, 00:18:52.260 "sock_priority": 0, 00:18:52.260 "abort_timeout_sec": 1, 00:18:52.260 "ack_timeout": 0, 00:18:52.260 "data_wr_pool_size": 0 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "nvmf_create_subsystem", 00:18:52.260 "params": { 00:18:52.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.260 "allow_any_host": false, 00:18:52.260 "serial_number": "00000000000000000000", 00:18:52.260 "model_number": "SPDK bdev Controller", 00:18:52.260 "max_namespaces": 32, 00:18:52.260 "min_cntlid": 1, 00:18:52.260 "max_cntlid": 65519, 00:18:52.260 
"ana_reporting": false 00:18:52.260 } 00:18:52.260 }, 00:18:52.260 { 00:18:52.260 "method": "nvmf_subsystem_add_host", 00:18:52.260 "params": { 00:18:52.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.260 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.261 "psk": "key0" 00:18:52.261 } 00:18:52.261 }, 00:18:52.261 { 00:18:52.261 "method": "nvmf_subsystem_add_ns", 00:18:52.261 "params": { 00:18:52.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.261 "namespace": { 00:18:52.261 "nsid": 1, 00:18:52.261 "bdev_name": "malloc0", 00:18:52.261 "nguid": "F83EA412CD8D4037AD99C31E04C43739", 00:18:52.261 "uuid": "f83ea412-cd8d-4037-ad99-c31e04c43739", 00:18:52.261 "no_auto_visible": false 00:18:52.261 } 00:18:52.261 } 00:18:52.261 }, 00:18:52.261 { 00:18:52.261 "method": "nvmf_subsystem_add_listener", 00:18:52.261 "params": { 00:18:52.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.261 "listen_address": { 00:18:52.261 "trtype": "TCP", 00:18:52.261 "adrfam": "IPv4", 00:18:52.261 "traddr": "10.0.0.2", 00:18:52.261 "trsvcid": "4420" 00:18:52.261 }, 00:18:52.261 "secure_channel": false, 00:18:52.261 "sock_impl": "ssl" 00:18:52.261 } 00:18:52.261 } 00:18:52.261 ] 00:18:52.261 } 00:18:52.261 ] 00:18:52.261 }' 00:18:52.261 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.520 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:52.520 "subsystems": [ 00:18:52.520 { 00:18:52.520 "subsystem": "keyring", 00:18:52.520 "config": [ 00:18:52.520 { 00:18:52.520 "method": "keyring_file_add_key", 00:18:52.520 "params": { 00:18:52.520 "name": "key0", 00:18:52.520 "path": "/tmp/tmp.lsJe3HrWob" 00:18:52.520 } 00:18:52.520 } 00:18:52.520 ] 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "subsystem": "iobuf", 00:18:52.520 "config": [ 00:18:52.520 { 00:18:52.520 "method": "iobuf_set_options", 00:18:52.520 "params": { 00:18:52.520 
"small_pool_count": 8192, 00:18:52.520 "large_pool_count": 1024, 00:18:52.520 "small_bufsize": 8192, 00:18:52.520 "large_bufsize": 135168, 00:18:52.520 "enable_numa": false 00:18:52.520 } 00:18:52.520 } 00:18:52.520 ] 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "subsystem": "sock", 00:18:52.520 "config": [ 00:18:52.520 { 00:18:52.520 "method": "sock_set_default_impl", 00:18:52.520 "params": { 00:18:52.520 "impl_name": "posix" 00:18:52.520 } 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "method": "sock_impl_set_options", 00:18:52.520 "params": { 00:18:52.520 "impl_name": "ssl", 00:18:52.520 "recv_buf_size": 4096, 00:18:52.520 "send_buf_size": 4096, 00:18:52.520 "enable_recv_pipe": true, 00:18:52.520 "enable_quickack": false, 00:18:52.520 "enable_placement_id": 0, 00:18:52.520 "enable_zerocopy_send_server": true, 00:18:52.520 "enable_zerocopy_send_client": false, 00:18:52.520 "zerocopy_threshold": 0, 00:18:52.520 "tls_version": 0, 00:18:52.520 "enable_ktls": false 00:18:52.520 } 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "method": "sock_impl_set_options", 00:18:52.520 "params": { 00:18:52.520 "impl_name": "posix", 00:18:52.520 "recv_buf_size": 2097152, 00:18:52.520 "send_buf_size": 2097152, 00:18:52.520 "enable_recv_pipe": true, 00:18:52.520 "enable_quickack": false, 00:18:52.520 "enable_placement_id": 0, 00:18:52.520 "enable_zerocopy_send_server": true, 00:18:52.520 "enable_zerocopy_send_client": false, 00:18:52.520 "zerocopy_threshold": 0, 00:18:52.520 "tls_version": 0, 00:18:52.520 "enable_ktls": false 00:18:52.520 } 00:18:52.520 } 00:18:52.520 ] 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "subsystem": "vmd", 00:18:52.520 "config": [] 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "subsystem": "accel", 00:18:52.520 "config": [ 00:18:52.520 { 00:18:52.520 "method": "accel_set_options", 00:18:52.520 "params": { 00:18:52.520 "small_cache_size": 128, 00:18:52.520 "large_cache_size": 16, 00:18:52.520 "task_count": 2048, 00:18:52.520 "sequence_count": 2048, 00:18:52.520 
"buf_count": 2048 00:18:52.520 } 00:18:52.520 } 00:18:52.520 ] 00:18:52.520 }, 00:18:52.520 { 00:18:52.520 "subsystem": "bdev", 00:18:52.520 "config": [ 00:18:52.520 { 00:18:52.520 "method": "bdev_set_options", 00:18:52.520 "params": { 00:18:52.520 "bdev_io_pool_size": 65535, 00:18:52.520 "bdev_io_cache_size": 256, 00:18:52.520 "bdev_auto_examine": true, 00:18:52.521 "iobuf_small_cache_size": 128, 00:18:52.521 "iobuf_large_cache_size": 16 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_raid_set_options", 00:18:52.521 "params": { 00:18:52.521 "process_window_size_kb": 1024, 00:18:52.521 "process_max_bandwidth_mb_sec": 0 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_iscsi_set_options", 00:18:52.521 "params": { 00:18:52.521 "timeout_sec": 30 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_nvme_set_options", 00:18:52.521 "params": { 00:18:52.521 "action_on_timeout": "none", 00:18:52.521 "timeout_us": 0, 00:18:52.521 "timeout_admin_us": 0, 00:18:52.521 "keep_alive_timeout_ms": 10000, 00:18:52.521 "arbitration_burst": 0, 00:18:52.521 "low_priority_weight": 0, 00:18:52.521 "medium_priority_weight": 0, 00:18:52.521 "high_priority_weight": 0, 00:18:52.521 "nvme_adminq_poll_period_us": 10000, 00:18:52.521 "nvme_ioq_poll_period_us": 0, 00:18:52.521 "io_queue_requests": 512, 00:18:52.521 "delay_cmd_submit": true, 00:18:52.521 "transport_retry_count": 4, 00:18:52.521 "bdev_retry_count": 3, 00:18:52.521 "transport_ack_timeout": 0, 00:18:52.521 "ctrlr_loss_timeout_sec": 0, 00:18:52.521 "reconnect_delay_sec": 0, 00:18:52.521 "fast_io_fail_timeout_sec": 0, 00:18:52.521 "disable_auto_failback": false, 00:18:52.521 "generate_uuids": false, 00:18:52.521 "transport_tos": 0, 00:18:52.521 "nvme_error_stat": false, 00:18:52.521 "rdma_srq_size": 0, 00:18:52.521 "io_path_stat": false, 00:18:52.521 "allow_accel_sequence": false, 00:18:52.521 "rdma_max_cq_size": 0, 00:18:52.521 "rdma_cm_event_timeout_ms": 0, 
00:18:52.521 "dhchap_digests": [ 00:18:52.521 "sha256", 00:18:52.521 "sha384", 00:18:52.521 "sha512" 00:18:52.521 ], 00:18:52.521 "dhchap_dhgroups": [ 00:18:52.521 "null", 00:18:52.521 "ffdhe2048", 00:18:52.521 "ffdhe3072", 00:18:52.521 "ffdhe4096", 00:18:52.521 "ffdhe6144", 00:18:52.521 "ffdhe8192" 00:18:52.521 ] 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_nvme_attach_controller", 00:18:52.521 "params": { 00:18:52.521 "name": "nvme0", 00:18:52.521 "trtype": "TCP", 00:18:52.521 "adrfam": "IPv4", 00:18:52.521 "traddr": "10.0.0.2", 00:18:52.521 "trsvcid": "4420", 00:18:52.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.521 "prchk_reftag": false, 00:18:52.521 "prchk_guard": false, 00:18:52.521 "ctrlr_loss_timeout_sec": 0, 00:18:52.521 "reconnect_delay_sec": 0, 00:18:52.521 "fast_io_fail_timeout_sec": 0, 00:18:52.521 "psk": "key0", 00:18:52.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.521 "hdgst": false, 00:18:52.521 "ddgst": false, 00:18:52.521 "multipath": "multipath" 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_nvme_set_hotplug", 00:18:52.521 "params": { 00:18:52.521 "period_us": 100000, 00:18:52.521 "enable": false 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_enable_histogram", 00:18:52.521 "params": { 00:18:52.521 "name": "nvme0n1", 00:18:52.521 "enable": true 00:18:52.521 } 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "method": "bdev_wait_for_examine" 00:18:52.521 } 00:18:52.521 ] 00:18:52.521 }, 00:18:52.521 { 00:18:52.521 "subsystem": "nbd", 00:18:52.521 "config": [] 00:18:52.521 } 00:18:52.521 ] 00:18:52.521 }' 00:18:52.521 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3509596 00:18:52.521 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3509596 ']' 00:18:52.521 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3509596 00:18:52.521 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509596 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509596' 00:18:52.521 killing process with pid 3509596 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3509596 00:18:52.521 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.521 00:18:52.521 Latency(us) 00:18:52.521 [2024-11-20T09:35:53.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.521 [2024-11-20T09:35:53.252Z] =================================================================================================================== 00:18:52.521 [2024-11-20T09:35:53.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3509596 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3509425 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3509425 ']' 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3509425 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.521 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.521 
10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509425 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509425' 00:18:52.781 killing process with pid 3509425 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3509425 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3509425 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.781 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:52.781 "subsystems": [ 00:18:52.781 { 00:18:52.781 "subsystem": "keyring", 00:18:52.781 "config": [ 00:18:52.781 { 00:18:52.781 "method": "keyring_file_add_key", 00:18:52.781 "params": { 00:18:52.781 "name": "key0", 00:18:52.781 "path": "/tmp/tmp.lsJe3HrWob" 00:18:52.781 } 00:18:52.781 } 00:18:52.781 ] 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "subsystem": "iobuf", 00:18:52.781 "config": [ 00:18:52.781 { 00:18:52.781 "method": "iobuf_set_options", 00:18:52.781 "params": { 00:18:52.781 "small_pool_count": 8192, 00:18:52.781 "large_pool_count": 1024, 00:18:52.781 "small_bufsize": 8192, 00:18:52.781 "large_bufsize": 135168, 00:18:52.781 "enable_numa": false 00:18:52.781 } 00:18:52.781 } 00:18:52.781 ] 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "subsystem": "sock", 00:18:52.781 "config": [ 
00:18:52.781 { 00:18:52.781 "method": "sock_set_default_impl", 00:18:52.781 "params": { 00:18:52.781 "impl_name": "posix" 00:18:52.781 } 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "method": "sock_impl_set_options", 00:18:52.781 "params": { 00:18:52.781 "impl_name": "ssl", 00:18:52.781 "recv_buf_size": 4096, 00:18:52.781 "send_buf_size": 4096, 00:18:52.781 "enable_recv_pipe": true, 00:18:52.781 "enable_quickack": false, 00:18:52.781 "enable_placement_id": 0, 00:18:52.781 "enable_zerocopy_send_server": true, 00:18:52.781 "enable_zerocopy_send_client": false, 00:18:52.781 "zerocopy_threshold": 0, 00:18:52.781 "tls_version": 0, 00:18:52.781 "enable_ktls": false 00:18:52.781 } 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "method": "sock_impl_set_options", 00:18:52.781 "params": { 00:18:52.781 "impl_name": "posix", 00:18:52.781 "recv_buf_size": 2097152, 00:18:52.781 "send_buf_size": 2097152, 00:18:52.781 "enable_recv_pipe": true, 00:18:52.781 "enable_quickack": false, 00:18:52.781 "enable_placement_id": 0, 00:18:52.781 "enable_zerocopy_send_server": true, 00:18:52.781 "enable_zerocopy_send_client": false, 00:18:52.781 "zerocopy_threshold": 0, 00:18:52.781 "tls_version": 0, 00:18:52.781 "enable_ktls": false 00:18:52.781 } 00:18:52.781 } 00:18:52.781 ] 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "subsystem": "vmd", 00:18:52.781 "config": [] 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "subsystem": "accel", 00:18:52.781 "config": [ 00:18:52.781 { 00:18:52.781 "method": "accel_set_options", 00:18:52.781 "params": { 00:18:52.781 "small_cache_size": 128, 00:18:52.781 "large_cache_size": 16, 00:18:52.781 "task_count": 2048, 00:18:52.781 "sequence_count": 2048, 00:18:52.781 "buf_count": 2048 00:18:52.781 } 00:18:52.781 } 00:18:52.781 ] 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "subsystem": "bdev", 00:18:52.781 "config": [ 00:18:52.781 { 00:18:52.781 "method": "bdev_set_options", 00:18:52.781 "params": { 00:18:52.781 "bdev_io_pool_size": 65535, 00:18:52.781 "bdev_io_cache_size": 
256, 00:18:52.781 "bdev_auto_examine": true, 00:18:52.781 "iobuf_small_cache_size": 128, 00:18:52.781 "iobuf_large_cache_size": 16 00:18:52.781 } 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "method": "bdev_raid_set_options", 00:18:52.781 "params": { 00:18:52.781 "process_window_size_kb": 1024, 00:18:52.781 "process_max_bandwidth_mb_sec": 0 00:18:52.781 } 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "method": "bdev_iscsi_set_options", 00:18:52.781 "params": { 00:18:52.781 "timeout_sec": 30 00:18:52.781 } 00:18:52.781 }, 00:18:52.781 { 00:18:52.781 "method": "bdev_nvme_set_options", 00:18:52.781 "params": { 00:18:52.781 "action_on_timeout": "none", 00:18:52.781 "timeout_us": 0, 00:18:52.781 "timeout_admin_us": 0, 00:18:52.781 "keep_alive_timeout_ms": 10000, 00:18:52.781 "arbitration_burst": 0, 00:18:52.781 "low_priority_weight": 0, 00:18:52.781 "medium_priority_weight": 0, 00:18:52.781 "high_priority_weight": 0, 00:18:52.781 "nvme_adminq_poll_period_us": 10000, 00:18:52.781 "nvme_ioq_poll_period_us": 0, 00:18:52.781 "io_queue_requests": 0, 00:18:52.782 "delay_cmd_submit": true, 00:18:52.782 "transport_retry_count": 4, 00:18:52.782 "bdev_retry_count": 3, 00:18:52.782 "transport_ack_timeout": 0, 00:18:52.782 "ctrlr_loss_timeout_sec": 0, 00:18:52.782 "reconnect_delay_sec": 0, 00:18:52.782 "fast_io_fail_timeout_sec": 0, 00:18:52.782 "disable_auto_failback": false, 00:18:52.782 "generate_uuids": false, 00:18:52.782 "transport_tos": 0, 00:18:52.782 "nvme_error_stat": false, 00:18:52.782 "rdma_srq_size": 0, 00:18:52.782 "io_path_stat": false, 00:18:52.782 "allow_accel_sequence": false, 00:18:52.782 "rdma_max_cq_size": 0, 00:18:52.782 "rdma_cm_event_timeout_ms": 0, 00:18:52.782 "dhchap_digests": [ 00:18:52.782 "sha256", 00:18:52.782 "sha384", 00:18:52.782 "sha512" 00:18:52.782 ], 00:18:52.782 "dhchap_dhgroups": [ 00:18:52.782 "null", 00:18:52.782 "ffdhe2048", 00:18:52.782 "ffdhe3072", 00:18:52.782 "ffdhe4096", 00:18:52.782 "ffdhe6144", 00:18:52.782 "ffdhe8192" 00:18:52.782 ] 
00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "bdev_nvme_set_hotplug", 00:18:52.782 "params": { 00:18:52.782 "period_us": 100000, 00:18:52.782 "enable": false 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "bdev_malloc_create", 00:18:52.782 "params": { 00:18:52.782 "name": "malloc0", 00:18:52.782 "num_blocks": 8192, 00:18:52.782 "block_size": 4096, 00:18:52.782 "physical_block_size": 4096, 00:18:52.782 "uuid": "f83ea412-cd8d-4037-ad99-c31e04c43739", 00:18:52.782 "optimal_io_boundary": 0, 00:18:52.782 "md_size": 0, 00:18:52.782 "dif_type": 0, 00:18:52.782 "dif_is_head_of_md": false, 00:18:52.782 "dif_pi_format": 0 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "bdev_wait_for_examine" 00:18:52.782 } 00:18:52.782 ] 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "subsystem": "nbd", 00:18:52.782 "config": [] 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "subsystem": "scheduler", 00:18:52.782 "config": [ 00:18:52.782 { 00:18:52.782 "method": "framework_set_scheduler", 00:18:52.782 "params": { 00:18:52.782 "name": "static" 00:18:52.782 } 00:18:52.782 } 00:18:52.782 ] 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "subsystem": "nvmf", 00:18:52.782 "config": [ 00:18:52.782 { 00:18:52.782 "method": "nvmf_set_config", 00:18:52.782 "params": { 00:18:52.782 "discovery_filter": "match_any", 00:18:52.782 "admin_cmd_passthru": { 00:18:52.782 "identify_ctrlr": false 00:18:52.782 }, 00:18:52.782 "dhchap_digests": [ 00:18:52.782 "sha256", 00:18:52.782 "sha384", 00:18:52.782 "sha512" 00:18:52.782 ], 00:18:52.782 "dhchap_dhgroups": [ 00:18:52.782 "null", 00:18:52.782 "ffdhe2048", 00:18:52.782 "ffdhe3072", 00:18:52.782 "ffdhe4096", 00:18:52.782 "ffdhe6144", 00:18:52.782 "ffdhe8192" 00:18:52.782 ] 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "nvmf_set_max_subsystems", 00:18:52.782 "params": { 00:18:52.782 "max_subsystems": 1024 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": 
"nvmf_set_crdt", 00:18:52.782 "params": { 00:18:52.782 "crdt1": 0, 00:18:52.782 "crdt2": 0, 00:18:52.782 "crdt3": 0 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "nvmf_create_transport", 00:18:52.782 "params": { 00:18:52.782 "trtype": "TCP", 00:18:52.782 "max_queue_depth": 128, 00:18:52.782 "max_io_qpairs_per_ctrlr": 127, 00:18:52.782 "in_capsule_data_size": 4096, 00:18:52.782 "max_io_size": 131072, 00:18:52.782 "io_unit_size": 131072, 00:18:52.782 "max_aq_depth": 128, 00:18:52.782 "num_shared_buffers": 511, 00:18:52.782 "buf_cache_size": 4294967295, 00:18:52.782 "dif_insert_or_strip": false, 00:18:52.782 "zcopy": false, 00:18:52.782 "c2h_success": false, 00:18:52.782 "sock_priority": 0, 00:18:52.782 "abort_timeout_sec": 1, 00:18:52.782 "ack_timeout": 0, 00:18:52.782 "data_wr_pool_size": 0 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "nvmf_create_subsystem", 00:18:52.782 "params": { 00:18:52.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.782 "allow_any_host": false, 00:18:52.782 "serial_number": "00000000000000000000", 00:18:52.782 "model_number": "SPDK bdev Controller", 00:18:52.782 "max_namespaces": 32, 00:18:52.782 "min_cntlid": 1, 00:18:52.782 "max_cntlid": 65519, 00:18:52.782 "ana_reporting": false 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "nvmf_subsystem_add_host", 00:18:52.782 "params": { 00:18:52.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.782 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.782 "psk": "key0" 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 00:18:52.782 "method": "nvmf_subsystem_add_ns", 00:18:52.782 "params": { 00:18:52.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.782 "namespace": { 00:18:52.782 "nsid": 1, 00:18:52.782 "bdev_name": "malloc0", 00:18:52.782 "nguid": "F83EA412CD8D4037AD99C31E04C43739", 00:18:52.782 "uuid": "f83ea412-cd8d-4037-ad99-c31e04c43739", 00:18:52.782 "no_auto_visible": false 00:18:52.782 } 00:18:52.782 } 00:18:52.782 }, 00:18:52.782 { 
00:18:52.782 "method": "nvmf_subsystem_add_listener", 00:18:52.782 "params": { 00:18:52.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.782 "listen_address": { 00:18:52.782 "trtype": "TCP", 00:18:52.782 "adrfam": "IPv4", 00:18:52.782 "traddr": "10.0.0.2", 00:18:52.782 "trsvcid": "4420" 00:18:52.782 }, 00:18:52.782 "secure_channel": false, 00:18:52.782 "sock_impl": "ssl" 00:18:52.782 } 00:18:52.782 } 00:18:52.782 ] 00:18:52.782 } 00:18:52.782 ] 00:18:52.782 }' 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3510034 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3510034 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3510034 ']' 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.782 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.782 [2024-11-20 10:35:53.492863] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:18:52.782 [2024-11-20 10:35:53.492909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.041 [2024-11-20 10:35:53.570789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.041 [2024-11-20 10:35:53.611506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.041 [2024-11-20 10:35:53.611542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.041 [2024-11-20 10:35:53.611550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.041 [2024-11-20 10:35:53.611556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.041 [2024-11-20 10:35:53.611561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.041 [2024-11-20 10:35:53.612160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.300 [2024-11-20 10:35:53.824910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.300 [2024-11-20 10:35:53.856951] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.301 [2024-11-20 10:35:53.857163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.868 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3510104 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3510104 /var/tmp/bdevperf.sock 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3510104 ']' 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.869 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:53.869 "subsystems": [ 00:18:53.869 { 00:18:53.869 "subsystem": "keyring", 00:18:53.869 "config": [ 00:18:53.869 { 00:18:53.869 "method": "keyring_file_add_key", 00:18:53.869 "params": { 00:18:53.869 "name": "key0", 00:18:53.869 "path": "/tmp/tmp.lsJe3HrWob" 00:18:53.869 } 00:18:53.869 } 00:18:53.869 ] 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "subsystem": "iobuf", 00:18:53.869 "config": [ 00:18:53.869 { 00:18:53.869 "method": "iobuf_set_options", 00:18:53.869 "params": { 00:18:53.869 "small_pool_count": 8192, 00:18:53.869 "large_pool_count": 1024, 00:18:53.869 "small_bufsize": 8192, 00:18:53.869 "large_bufsize": 135168, 00:18:53.869 "enable_numa": false 00:18:53.869 } 00:18:53.869 } 00:18:53.869 ] 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "subsystem": "sock", 00:18:53.869 "config": [ 00:18:53.869 { 00:18:53.869 "method": "sock_set_default_impl", 00:18:53.869 "params": { 00:18:53.869 "impl_name": "posix" 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "sock_impl_set_options", 00:18:53.869 "params": { 00:18:53.869 "impl_name": "ssl", 00:18:53.869 "recv_buf_size": 4096, 00:18:53.869 "send_buf_size": 4096, 00:18:53.869 "enable_recv_pipe": true, 00:18:53.869 "enable_quickack": false, 00:18:53.869 "enable_placement_id": 0, 00:18:53.869 "enable_zerocopy_send_server": true, 00:18:53.869 "enable_zerocopy_send_client": false, 00:18:53.869 "zerocopy_threshold": 0, 00:18:53.869 "tls_version": 0, 00:18:53.869 "enable_ktls": false 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "sock_impl_set_options", 00:18:53.869 "params": { 
00:18:53.869 "impl_name": "posix", 00:18:53.869 "recv_buf_size": 2097152, 00:18:53.869 "send_buf_size": 2097152, 00:18:53.869 "enable_recv_pipe": true, 00:18:53.869 "enable_quickack": false, 00:18:53.869 "enable_placement_id": 0, 00:18:53.869 "enable_zerocopy_send_server": true, 00:18:53.869 "enable_zerocopy_send_client": false, 00:18:53.869 "zerocopy_threshold": 0, 00:18:53.869 "tls_version": 0, 00:18:53.869 "enable_ktls": false 00:18:53.869 } 00:18:53.869 } 00:18:53.869 ] 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "subsystem": "vmd", 00:18:53.869 "config": [] 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "subsystem": "accel", 00:18:53.869 "config": [ 00:18:53.869 { 00:18:53.869 "method": "accel_set_options", 00:18:53.869 "params": { 00:18:53.869 "small_cache_size": 128, 00:18:53.869 "large_cache_size": 16, 00:18:53.869 "task_count": 2048, 00:18:53.869 "sequence_count": 2048, 00:18:53.869 "buf_count": 2048 00:18:53.869 } 00:18:53.869 } 00:18:53.869 ] 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "subsystem": "bdev", 00:18:53.869 "config": [ 00:18:53.869 { 00:18:53.869 "method": "bdev_set_options", 00:18:53.869 "params": { 00:18:53.869 "bdev_io_pool_size": 65535, 00:18:53.869 "bdev_io_cache_size": 256, 00:18:53.869 "bdev_auto_examine": true, 00:18:53.869 "iobuf_small_cache_size": 128, 00:18:53.869 "iobuf_large_cache_size": 16 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_raid_set_options", 00:18:53.869 "params": { 00:18:53.869 "process_window_size_kb": 1024, 00:18:53.869 "process_max_bandwidth_mb_sec": 0 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_iscsi_set_options", 00:18:53.869 "params": { 00:18:53.869 "timeout_sec": 30 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_nvme_set_options", 00:18:53.869 "params": { 00:18:53.869 "action_on_timeout": "none", 00:18:53.869 "timeout_us": 0, 00:18:53.869 "timeout_admin_us": 0, 00:18:53.869 "keep_alive_timeout_ms": 10000, 00:18:53.869 
"arbitration_burst": 0, 00:18:53.869 "low_priority_weight": 0, 00:18:53.869 "medium_priority_weight": 0, 00:18:53.869 "high_priority_weight": 0, 00:18:53.869 "nvme_adminq_poll_period_us": 10000, 00:18:53.869 "nvme_ioq_poll_period_us": 0, 00:18:53.869 "io_queue_requests": 512, 00:18:53.869 "delay_cmd_submit": true, 00:18:53.869 "transport_retry_count": 4, 00:18:53.869 "bdev_retry_count": 3, 00:18:53.869 "transport_ack_timeout": 0, 00:18:53.869 "ctrlr_loss_timeout_sec": 0, 00:18:53.869 "reconnect_delay_sec": 0, 00:18:53.869 "fast_io_fail_timeout_sec": 0, 00:18:53.869 "disable_auto_failback": false, 00:18:53.869 "generate_uuids": false, 00:18:53.869 "transport_tos": 0, 00:18:53.869 "nvme_error_stat": false, 00:18:53.869 "rdma_srq_size": 0, 00:18:53.869 "io_path_stat": false, 00:18:53.869 "allow_accel_sequence": false, 00:18:53.869 "rdma_max_cq_size": 0, 00:18:53.869 "rdma_cm_event_timeout_ms": 0, 00:18:53.869 "dhchap_digests": [ 00:18:53.869 "sha256", 00:18:53.869 "sha384", 00:18:53.869 "sha512" 00:18:53.869 ], 00:18:53.869 "dhchap_dhgroups": [ 00:18:53.869 "null", 00:18:53.869 "ffdhe2048", 00:18:53.869 "ffdhe3072", 00:18:53.869 "ffdhe4096", 00:18:53.869 "ffdhe6144", 00:18:53.869 "ffdhe8192" 00:18:53.869 ] 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_nvme_attach_controller", 00:18:53.869 "params": { 00:18:53.869 "name": "nvme0", 00:18:53.869 "trtype": "TCP", 00:18:53.869 "adrfam": "IPv4", 00:18:53.869 "traddr": "10.0.0.2", 00:18:53.869 "trsvcid": "4420", 00:18:53.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.869 "prchk_reftag": false, 00:18:53.869 "prchk_guard": false, 00:18:53.869 "ctrlr_loss_timeout_sec": 0, 00:18:53.869 "reconnect_delay_sec": 0, 00:18:53.869 "fast_io_fail_timeout_sec": 0, 00:18:53.869 "psk": "key0", 00:18:53.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.869 "hdgst": false, 00:18:53.869 "ddgst": false, 00:18:53.869 "multipath": "multipath" 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 
"method": "bdev_nvme_set_hotplug", 00:18:53.869 "params": { 00:18:53.869 "period_us": 100000, 00:18:53.869 "enable": false 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_enable_histogram", 00:18:53.869 "params": { 00:18:53.869 "name": "nvme0n1", 00:18:53.869 "enable": true 00:18:53.869 } 00:18:53.869 }, 00:18:53.869 { 00:18:53.869 "method": "bdev_wait_for_examine" 00:18:53.869 } 00:18:53.869 ] 00:18:53.870 }, 00:18:53.870 { 00:18:53.870 "subsystem": "nbd", 00:18:53.870 "config": [] 00:18:53.870 } 00:18:53.870 ] 00:18:53.870 }' 00:18:53.870 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.870 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.870 [2024-11-20 10:35:54.403016] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:18:53.870 [2024-11-20 10:35:54.403062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510104 ] 00:18:53.870 [2024-11-20 10:35:54.476462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.870 [2024-11-20 10:35:54.516922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.128 [2024-11-20 10:35:54.669549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.695 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.695 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.695 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:54.695 10:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:54.954 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.954 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.954 Running I/O for 1 seconds... 00:18:55.892 5334.00 IOPS, 20.84 MiB/s 00:18:55.892 Latency(us) 00:18:55.892 [2024-11-20T09:35:56.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.892 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.892 Verification LBA range: start 0x0 length 0x2000 00:18:55.892 nvme0n1 : 1.02 5334.38 20.84 0.00 0.00 23766.80 6724.56 25188.62 00:18:55.892 [2024-11-20T09:35:56.623Z] =================================================================================================================== 00:18:55.892 [2024-11-20T09:35:56.623Z] Total : 5334.38 20.84 0.00 0.00 23766.80 6724.56 25188.62 00:18:55.892 { 00:18:55.892 "results": [ 00:18:55.892 { 00:18:55.892 "job": "nvme0n1", 00:18:55.892 "core_mask": "0x2", 00:18:55.892 "workload": "verify", 00:18:55.892 "status": "finished", 00:18:55.892 "verify_range": { 00:18:55.892 "start": 0, 00:18:55.892 "length": 8192 00:18:55.892 }, 00:18:55.892 "queue_depth": 128, 00:18:55.892 "io_size": 4096, 00:18:55.892 "runtime": 1.024112, 00:18:55.892 "iops": 5334.377489962036, 00:18:55.892 "mibps": 20.837412070164202, 00:18:55.892 "io_failed": 0, 00:18:55.892 "io_timeout": 0, 00:18:55.892 "avg_latency_us": 23766.79738382319, 00:18:55.892 "min_latency_us": 6724.5634782608695, 00:18:55.892 "max_latency_us": 25188.61913043478 00:18:55.892 } 00:18:55.892 ], 00:18:55.892 "core_count": 1 00:18:55.892 } 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:55.892 10:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:55.892 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:55.892 nvmf_trace.0 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3510104 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3510104 ']' 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3510104 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3510104 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3510104' 00:18:56.151 killing process with pid 3510104 00:18:56.151 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3510104 00:18:56.151 Received shutdown signal, test time was about 1.000000 seconds 00:18:56.151 00:18:56.151 Latency(us) 00:18:56.151 [2024-11-20T09:35:56.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.151 [2024-11-20T09:35:56.882Z] =================================================================================================================== 00:18:56.151 [2024-11-20T09:35:56.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.152 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3510104 00:18:56.410 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:56.410 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.410 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:56.410 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.410 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.411 rmmod nvme_tcp 00:18:56.411 rmmod nvme_fabrics 00:18:56.411 rmmod nvme_keyring 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3510034 ']' 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3510034 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3510034 ']' 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3510034 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3510034 00:18:56.411 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.411 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.411 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3510034' 00:18:56.411 killing process with pid 3510034 00:18:56.411 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3510034 00:18:56.411 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3510034 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.670 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tyeaEkQEMM /tmp/tmp.0ukz7ZHTHC /tmp/tmp.lsJe3HrWob 00:18:58.686 00:18:58.686 real 1m19.699s 00:18:58.686 user 2m2.238s 00:18:58.686 sys 0m30.483s 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.686 ************************************ 00:18:58.686 END TEST nvmf_tls 00:18:58.686 ************************************ 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.686 ************************************ 00:18:58.686 START TEST nvmf_fips 00:18:58.686 ************************************ 00:18:58.686 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.686 * Looking for test storage... 00:18:58.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.945 
10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:58.945 10:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.945 --rc genhtml_branch_coverage=1 00:18:58.945 --rc genhtml_function_coverage=1 00:18:58.945 --rc genhtml_legend=1 00:18:58.945 --rc geninfo_all_blocks=1 00:18:58.945 --rc geninfo_unexecuted_blocks=1 00:18:58.945 00:18:58.945 ' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.945 --rc genhtml_branch_coverage=1 00:18:58.945 --rc genhtml_function_coverage=1 00:18:58.945 --rc genhtml_legend=1 00:18:58.945 --rc geninfo_all_blocks=1 00:18:58.945 --rc geninfo_unexecuted_blocks=1 00:18:58.945 00:18:58.945 ' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.945 --rc genhtml_branch_coverage=1 00:18:58.945 --rc genhtml_function_coverage=1 00:18:58.945 --rc genhtml_legend=1 00:18:58.945 --rc geninfo_all_blocks=1 00:18:58.945 --rc geninfo_unexecuted_blocks=1 00:18:58.945 00:18:58.945 ' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.945 --rc genhtml_branch_coverage=1 00:18:58.945 --rc genhtml_function_coverage=1 00:18:58.945 --rc genhtml_legend=1 00:18:58.945 --rc geninfo_all_blocks=1 00:18:58.945 --rc geninfo_unexecuted_blocks=1 00:18:58.945 00:18:58.945 ' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.945 10:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.945 10:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.945 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:58.946 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:59.204 Error setting digest 00:18:59.204 40E2973C947F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:59.204 40E2973C947F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:59.204 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.205 10:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.205 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.775 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:05.775 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.775 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.775 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.775 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.776 10:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:19:05.776 00:19:05.776 --- 10.0.0.2 ping statistics --- 00:19:05.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.776 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:05.776 00:19:05.776 --- 10.0.0.1 ping statistics --- 00:19:05.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.776 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:05.776 10:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3514262 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3514262 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3514262 ']' 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.776 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.776 [2024-11-20 10:36:05.751434] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:19:05.776 [2024-11-20 10:36:05.751477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.776 [2024-11-20 10:36:05.829059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.776 [2024-11-20 10:36:05.868660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.776 [2024-11-20 10:36:05.868691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.776 [2024-11-20 10:36:05.868698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.776 [2024-11-20 10:36:05.868704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.776 [2024-11-20 10:36:05.868709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:05.776 [2024-11-20 10:36:05.869274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.4Mg 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.4Mg 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.4Mg 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.4Mg 00:19:06.035 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.295 [2024-11-20 10:36:06.790428] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.295 [2024-11-20 10:36:06.806434] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.295 [2024-11-20 10:36:06.806636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.295 malloc0 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3514510 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3514510 /var/tmp/bdevperf.sock 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3514510 ']' 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.295 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.295 [2024-11-20 10:36:06.938406] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:19:06.295 [2024-11-20 10:36:06.938453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514510 ] 00:19:06.295 [2024-11-20 10:36:07.015404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.554 [2024-11-20 10:36:07.056800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.123 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.123 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:07.123 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4Mg 00:19:07.381 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.640 [2024-11-20 10:36:08.142646] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.640 TLSTESTn1 00:19:07.640 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.640 Running I/O for 10 seconds... 
00:19:09.952 5390.00 IOPS, 21.05 MiB/s
[2024-11-20T09:36:11.616Z] 5453.50 IOPS, 21.30 MiB/s
[2024-11-20T09:36:12.553Z] 5400.33 IOPS, 21.10 MiB/s
[2024-11-20T09:36:13.486Z] 5396.50 IOPS, 21.08 MiB/s
[2024-11-20T09:36:14.422Z] 5427.20 IOPS, 21.20 MiB/s
[2024-11-20T09:36:15.357Z] 5419.83 IOPS, 21.17 MiB/s
[2024-11-20T09:36:16.733Z] 5439.86 IOPS, 21.25 MiB/s
[2024-11-20T09:36:17.669Z] 5324.88 IOPS, 20.80 MiB/s
[2024-11-20T09:36:18.604Z] 5261.00 IOPS, 20.55 MiB/s
[2024-11-20T09:36:18.604Z] 5217.70 IOPS, 20.38 MiB/s
00:19:17.873 Latency(us)
00:19:17.873 [2024-11-20T09:36:18.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.873 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:17.873 Verification LBA range: start 0x0 length 0x2000
00:19:17.873 TLSTESTn1 : 10.02 5218.28 20.38 0.00 0.00 24485.06 5185.89 31913.18
00:19:17.873 [2024-11-20T09:36:18.604Z] ===================================================================================================================
00:19:17.873 [2024-11-20T09:36:18.604Z] Total : 5218.28 20.38 0.00 0.00 24485.06 5185.89 31913.18
00:19:17.873 {
00:19:17.873 "results": [
00:19:17.873 {
00:19:17.873 "job": "TLSTESTn1",
00:19:17.873 "core_mask": "0x4",
00:19:17.873 "workload": "verify",
00:19:17.873 "status": "finished",
00:19:17.873 "verify_range": {
00:19:17.873 "start": 0,
00:19:17.873 "length": 8192
00:19:17.873 },
00:19:17.873 "queue_depth": 128,
00:19:17.873 "io_size": 4096,
00:19:17.873 "runtime": 10.022843,
00:19:17.873 "iops": 5218.279883262663,
00:19:17.873 "mibps": 20.383905793994778,
00:19:17.873 "io_failed": 0,
00:19:17.873 "io_timeout": 0,
00:19:17.873 "avg_latency_us": 24485.06349232634,
00:19:17.873 "min_latency_us": 5185.892173913044,
00:19:17.873 "max_latency_us": 31913.182608695653
00:19:17.873 }
00:19:17.873 ],
00:19:17.873 "core_count": 1
00:19:17.873 }
00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:17.873
10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.873 nvmf_trace.0 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3514510 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3514510 ']' 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3514510 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3514510 00:19:17.873 10:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3514510'
00:19:17.873 killing process with pid 3514510
00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3514510
00:19:17.873 Received shutdown signal, test time was about 10.000000 seconds
00:19:17.873
00:19:17.873 Latency(us)
00:19:17.873 [2024-11-20T09:36:18.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.873 [2024-11-20T09:36:18.604Z] ===================================================================================================================
00:19:17.873 [2024-11-20T09:36:18.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:17.873 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3514510
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:18.132 rmmod nvme_tcp
00:19:18.132 rmmod nvme_fabrics
00:19:18.132 rmmod nvme_keyring
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3514262 ']' 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3514262 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3514262 ']' 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3514262 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3514262 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3514262' 00:19:18.132 killing process with pid 3514262 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3514262 00:19:18.132 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3514262 00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:18.391 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.4Mg
00:19:20.929
00:19:20.929 real 0m21.729s
00:19:20.929 user 0m23.217s
00:19:20.929 sys 0m9.988s
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:20.929 ************************************
00:19:20.929 END TEST nvmf_fips
00:19:20.929 ************************************
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.929 ************************************ 00:19:20.929 START TEST nvmf_control_msg_list 00:19:20.929 ************************************ 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:20.929 * Looking for test storage... 00:19:20.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.929 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.930 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.930 --rc genhtml_branch_coverage=1 00:19:20.930 --rc genhtml_function_coverage=1 00:19:20.930 --rc genhtml_legend=1 00:19:20.930 --rc geninfo_all_blocks=1 00:19:20.930 --rc geninfo_unexecuted_blocks=1 00:19:20.930 00:19:20.930 ' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.930 --rc genhtml_branch_coverage=1 00:19:20.930 --rc genhtml_function_coverage=1 00:19:20.930 --rc genhtml_legend=1 00:19:20.930 --rc geninfo_all_blocks=1 00:19:20.930 --rc geninfo_unexecuted_blocks=1 00:19:20.930 00:19:20.930 ' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.930 --rc genhtml_branch_coverage=1 00:19:20.930 --rc genhtml_function_coverage=1 00:19:20.930 --rc genhtml_legend=1 00:19:20.930 --rc geninfo_all_blocks=1 00:19:20.930 --rc geninfo_unexecuted_blocks=1 00:19:20.930 00:19:20.930 ' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.930 --rc genhtml_branch_coverage=1 00:19:20.930 --rc genhtml_function_coverage=1 00:19:20.930 --rc genhtml_legend=1 00:19:20.930 --rc geninfo_all_blocks=1 00:19:20.930 --rc geninfo_unexecuted_blocks=1 00:19:20.930 00:19:20.930 ' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.930 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.931 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.931 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.931 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.212 10:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:26.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:26.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.212 10:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:26.212 Found net devices under 0000:86:00.0: cvl_0_0 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.212 10:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:26.212 Found net devices under 0000:86:00.1: cvl_0_1 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.212 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.213 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.472 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.472 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.472 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.472 10:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:19:26.472 00:19:26.472 --- 10.0.0.2 ping statistics --- 00:19:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.472 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:26.472 00:19:26.472 --- 10.0.0.1 ping statistics --- 00:19:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.472 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3520279 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3520279 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3520279 ']' 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.472 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.473 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.473 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.473 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.732 [2024-11-20 10:36:27.232461] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:19:26.732 [2024-11-20 10:36:27.232504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.732 [2024-11-20 10:36:27.311772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.732 [2024-11-20 10:36:27.351546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.732 [2024-11-20 10:36:27.351581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.732 [2024-11-20 10:36:27.351588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.732 [2024-11-20 10:36:27.351594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.732 [2024-11-20 10:36:27.351599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.732 [2024-11-20 10:36:27.352166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.732 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.732 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:26.732 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.732 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.732 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 [2024-11-20 10:36:27.500490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 Malloc0 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 [2024-11-20 10:36:27.540967] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3520487 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3520489 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3520491 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.991 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3520487 00:19:26.991 [2024-11-20 10:36:27.629679] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:26.991 [2024-11-20 10:36:27.629864] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:26.991 [2024-11-20 10:36:27.630032] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:28.370 Initializing NVMe Controllers 00:19:28.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:28.370 Initialization complete. Launching workers. 00:19:28.370 ======================================================== 00:19:28.370 Latency(us) 00:19:28.370 Device Information : IOPS MiB/s Average min max 00:19:28.370 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5030.99 19.65 198.39 123.36 41738.18 00:19:28.370 ======================================================== 00:19:28.370 Total : 5030.99 19.65 198.39 123.36 41738.18 00:19:28.370 00:19:28.370 Initializing NVMe Controllers 00:19:28.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:28.370 Initialization complete. Launching workers. 
00:19:28.370 ======================================================== 00:19:28.370 Latency(us) 00:19:28.370 Device Information : IOPS MiB/s Average min max 00:19:28.370 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5406.00 21.12 184.60 115.72 354.66 00:19:28.370 ======================================================== 00:19:28.371 Total : 5406.00 21.12 184.60 115.72 354.66 00:19:28.371 00:19:28.371 Initializing NVMe Controllers 00:19:28.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:28.371 Initialization complete. Launching workers. 00:19:28.371 ======================================================== 00:19:28.371 Latency(us) 00:19:28.371 Device Information : IOPS MiB/s Average min max 00:19:28.371 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4166.00 16.27 240.50 140.78 41197.87 00:19:28.371 ======================================================== 00:19:28.371 Total : 4166.00 16.27 240.50 140.78 41197.87 00:19:28.371 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3520489 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3520491 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.371 10:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.371 rmmod nvme_tcp 00:19:28.371 rmmod nvme_fabrics 00:19:28.371 rmmod nvme_keyring 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3520279 ']' 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3520279 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3520279 ']' 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3520279 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3520279 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3520279' 00:19:28.371 killing process with pid 3520279 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3520279 00:19:28.371 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3520279 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.629 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.533 00:19:30.533 real 0m10.058s 00:19:30.533 user 0m6.620s 
00:19:30.533 sys 0m5.431s 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.533 ************************************ 00:19:30.533 END TEST nvmf_control_msg_list 00:19:30.533 ************************************ 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.533 10:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.791 ************************************ 00:19:30.791 START TEST nvmf_wait_for_buf 00:19:30.791 ************************************ 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.791 * Looking for test storage... 
00:19:30.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.791 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.792 --rc genhtml_branch_coverage=1 00:19:30.792 --rc genhtml_function_coverage=1 00:19:30.792 --rc genhtml_legend=1 00:19:30.792 --rc geninfo_all_blocks=1 00:19:30.792 --rc geninfo_unexecuted_blocks=1 00:19:30.792 00:19:30.792 ' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.792 --rc genhtml_branch_coverage=1 00:19:30.792 --rc genhtml_function_coverage=1 00:19:30.792 --rc genhtml_legend=1 00:19:30.792 --rc geninfo_all_blocks=1 00:19:30.792 --rc geninfo_unexecuted_blocks=1 00:19:30.792 00:19:30.792 ' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.792 --rc genhtml_branch_coverage=1 00:19:30.792 --rc genhtml_function_coverage=1 00:19:30.792 --rc genhtml_legend=1 00:19:30.792 --rc geninfo_all_blocks=1 00:19:30.792 --rc geninfo_unexecuted_blocks=1 00:19:30.792 00:19:30.792 ' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.792 --rc genhtml_branch_coverage=1 00:19:30.792 --rc genhtml_function_coverage=1 00:19:30.792 --rc genhtml_legend=1 00:19:30.792 --rc geninfo_all_blocks=1 00:19:30.792 --rc geninfo_unexecuted_blocks=1 00:19:30.792 00:19:30.792 ' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:30.792 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.361 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:37.362 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.362 10:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.362 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.362 10:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.362 10:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:19:37.362 00:19:37.362 --- 10.0.0.2 ping statistics --- 00:19:37.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.362 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:19:37.362 00:19:37.362 --- 10.0.0.1 ping statistics --- 00:19:37.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.362 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3524212 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3524212 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3524212 ']' 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.362 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.362 [2024-11-20 10:36:37.509538] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:19:37.363 [2024-11-20 10:36:37.509580] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.363 [2024-11-20 10:36:37.590526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.363 [2024-11-20 10:36:37.632667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.363 [2024-11-20 10:36:37.632705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:37.363 [2024-11-20 10:36:37.632712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.363 [2024-11-20 10:36:37.632719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.363 [2024-11-20 10:36:37.632724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.363 [2024-11-20 10:36:37.633292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 
10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 Malloc0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.363 [2024-11-20 10:36:37.815414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 [2024-11-20 10:36:37.843605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
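The trace above brings the target up over JSON-RPC: shrink the iobuf pools, start the framework, create a malloc bdev, then wire up the TCP transport, subsystem, namespace, and listener. A dry-run sketch of that call sequence is below; `rpc_cmd` here only echoes each call (in the real harness it invokes `scripts/rpc.py` against the target's RPC socket), and the tiny `-n 24 -b 24` buffer counts are taken from the trace — they deliberately starve the small pool so the retry path is exercised.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the wait_for_buf.sh bring-up sequence seen in the trace.
# rpc_cmd only prints the call; the real harness runs scripts/rpc.py instead.
rpc_cmd() { echo "rpc.py $*"; }

SUBNQN=nqn.2024-07.io.spdk:cnode0

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
# -n/-b are intentionally tiny so the transport must retry buffer allocation
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem "$SUBNQN" -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns "$SUBNQN" Malloc0
rpc_cmd nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

After this sequence the perf run in the trace (`spdk_nvme_perf -q 4 -o 131072 ...`) drives I/O through the starved pool.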
00:19:37.363 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.363 [2024-11-20 10:36:37.932023] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:38.738 Initializing NVMe Controllers 00:19:38.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:38.738 Initialization complete. Launching workers. 00:19:38.738 ======================================================== 00:19:38.738 Latency(us) 00:19:38.738 Device Information : IOPS MiB/s Average min max 00:19:38.738 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166182.45 7246.59 194465.97 00:19:38.738 ======================================================== 00:19:38.738 Total : 25.00 3.12 166182.45 7246.59 194465.97 00:19:38.738 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.997 10:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.997 rmmod nvme_tcp 00:19:38.997 rmmod nvme_fabrics 00:19:38.997 rmmod nvme_keyring 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3524212 ']' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3524212 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3524212 ']' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3524212 
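The pass/fail criterion above is the `iobuf_get_stats` retry counter: `wait_for_buf.sh@33` fails if `retry_count` is 0, i.e. if the small pool was never exhausted. A minimal sketch of that check follows; the JSON sample is hand-made, with its shape inferred from the jq filter in the trace (`.[] | select(.module == "nvmf_TCP") | .small_pool.retry`), and sed stands in for jq purely for illustration.

```shell
#!/usr/bin/env sh
# Illustrative version of the wait_for_buf retry check. The stats string is a
# hypothetical sample of iobuf_get_stats output, shaped after the trace's jq
# filter; a real run would fetch it via rpc.py iobuf_get_stats.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":374},"large_pool":{"retry":0}}]'

# Extract the nvmf_TCP small_pool retry counter (sed stand-in for jq).
retry_count=$(printf '%s\n' "$stats" |
  sed -n 's/.*"small_pool":{"retry":\([0-9]*\)}.*/\1/p')

# The test passes only if buffer allocation had to retry at least once.
if [ "$retry_count" -eq 0 ]; then
  echo "no iobuf retries seen; wait_for_buf would fail"
else
  echo "retry_count=$retry_count"
fi
```

The 374 here matches the value the trace reports for this run; any nonzero count satisfies the `[[ 374 -eq 0 ]]` guard.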
00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3524212 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3524212' 00:19:38.997 killing process with pid 3524212 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3524212 00:19:38.997 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3524212 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.255 10:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.255 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.159 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.160 00:19:41.160 real 0m10.579s 00:19:41.160 user 0m4.094s 00:19:41.160 sys 0m4.941s 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:41.160 ************************************ 00:19:41.160 END TEST nvmf_wait_for_buf 00:19:41.160 ************************************ 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.160 10:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.730 
10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.730 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.731 10:36:47 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.731 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.731 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.731 ************************************ 00:19:47.731 START TEST nvmf_perf_adq 00:19:47.731 ************************************ 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.731 * Looking for test storage... 00:19:47.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.731 --rc genhtml_branch_coverage=1 00:19:47.731 --rc genhtml_function_coverage=1 00:19:47.731 --rc genhtml_legend=1 00:19:47.731 --rc geninfo_all_blocks=1 00:19:47.731 --rc geninfo_unexecuted_blocks=1 00:19:47.731 00:19:47.731 ' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.731 --rc genhtml_branch_coverage=1 00:19:47.731 --rc genhtml_function_coverage=1 00:19:47.731 --rc genhtml_legend=1 00:19:47.731 --rc geninfo_all_blocks=1 00:19:47.731 --rc geninfo_unexecuted_blocks=1 00:19:47.731 00:19:47.731 ' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.731 --rc genhtml_branch_coverage=1 00:19:47.731 --rc genhtml_function_coverage=1 00:19:47.731 --rc genhtml_legend=1 00:19:47.731 --rc geninfo_all_blocks=1 00:19:47.731 --rc geninfo_unexecuted_blocks=1 00:19:47.731 00:19:47.731 ' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.731 --rc genhtml_branch_coverage=1 00:19:47.731 --rc genhtml_function_coverage=1 00:19:47.731 --rc genhtml_legend=1 00:19:47.731 --rc geninfo_all_blocks=1 00:19:47.731 --rc geninfo_unexecuted_blocks=1 00:19:47.731 00:19:47.731 ' 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.731 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.732 10:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.732 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.005 10:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:53.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:53.005 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.005 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:53.006 Found net devices under 0000:86:00.0: cvl_0_0 00:19:53.006 10:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:53.006 Found net devices under 0000:86:00.1: cvl_0_1 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:53.006 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:53.942 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:55.865 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.188 10:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.188 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.188 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.189 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:01.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:20:01.189 00:20:01.189 --- 10.0.0.2 ping statistics --- 00:20:01.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.189 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:20:01.189 00:20:01.189 --- 10.0.0.1 ping statistics --- 00:20:01.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.189 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3532558 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3532558 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3532558 ']' 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.189 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.189 [2024-11-20 10:37:01.848184] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:20:01.189 [2024-11-20 10:37:01.848236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.504 [2024-11-20 10:37:01.930790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.504 [2024-11-20 10:37:01.977220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.504 [2024-11-20 10:37:01.977256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.504 [2024-11-20 10:37:01.977264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.504 [2024-11-20 10:37:01.977270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.504 [2024-11-20 10:37:01.977275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.504 [2024-11-20 10:37:01.978691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.504 [2024-11-20 10:37:01.978803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.504 [2024-11-20 10:37:01.978910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.504 [2024-11-20 10:37:01.978911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.504 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:01.504 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 [2024-11-20 10:37:02.176698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.504 Malloc1 00:20:01.504 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.504 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.762 [2024-11-20 10:37:02.246305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3532663 00:20:01.762 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:01.762 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:03.666 "tick_rate": 2300000000, 00:20:03.666 "poll_groups": [ 00:20:03.666 { 00:20:03.666 "name": "nvmf_tgt_poll_group_000", 00:20:03.666 "admin_qpairs": 1, 00:20:03.666 "io_qpairs": 1, 00:20:03.666 "current_admin_qpairs": 1, 00:20:03.666 "current_io_qpairs": 1, 00:20:03.666 "pending_bdev_io": 0, 00:20:03.666 "completed_nvme_io": 19822, 00:20:03.666 "transports": [ 00:20:03.666 { 00:20:03.666 "trtype": "TCP" 00:20:03.666 } 00:20:03.666 ] 00:20:03.666 }, 00:20:03.666 { 00:20:03.666 "name": "nvmf_tgt_poll_group_001", 00:20:03.666 "admin_qpairs": 0, 00:20:03.666 "io_qpairs": 1, 00:20:03.666 "current_admin_qpairs": 0, 00:20:03.666 "current_io_qpairs": 1, 00:20:03.666 "pending_bdev_io": 0, 00:20:03.666 "completed_nvme_io": 20161, 00:20:03.666 "transports": [ 00:20:03.666 { 00:20:03.666 "trtype": "TCP" 00:20:03.666 } 00:20:03.666 ] 00:20:03.666 }, 00:20:03.666 { 00:20:03.666 "name": "nvmf_tgt_poll_group_002", 00:20:03.666 "admin_qpairs": 0, 00:20:03.666 "io_qpairs": 1, 00:20:03.666 "current_admin_qpairs": 0, 00:20:03.666 "current_io_qpairs": 1, 00:20:03.666 "pending_bdev_io": 0, 00:20:03.666 "completed_nvme_io": 19777, 00:20:03.666 
"transports": [ 00:20:03.666 { 00:20:03.666 "trtype": "TCP" 00:20:03.666 } 00:20:03.666 ] 00:20:03.666 }, 00:20:03.666 { 00:20:03.666 "name": "nvmf_tgt_poll_group_003", 00:20:03.666 "admin_qpairs": 0, 00:20:03.666 "io_qpairs": 1, 00:20:03.666 "current_admin_qpairs": 0, 00:20:03.666 "current_io_qpairs": 1, 00:20:03.666 "pending_bdev_io": 0, 00:20:03.666 "completed_nvme_io": 19788, 00:20:03.666 "transports": [ 00:20:03.666 { 00:20:03.666 "trtype": "TCP" 00:20:03.666 } 00:20:03.666 ] 00:20:03.666 } 00:20:03.666 ] 00:20:03.666 }' 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:03.666 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3532663 00:20:11.785 Initializing NVMe Controllers 00:20:11.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:11.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:11.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:11.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:11.785 Initialization complete. Launching workers. 
00:20:11.785 ======================================================== 00:20:11.785 Latency(us) 00:20:11.785 Device Information : IOPS MiB/s Average min max 00:20:11.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10506.40 41.04 6093.20 1814.61 10513.74 00:20:11.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10607.50 41.44 6033.15 2123.87 10674.26 00:20:11.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10577.00 41.32 6052.51 2321.76 10486.81 00:20:11.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10576.20 41.31 6052.08 2232.58 10172.43 00:20:11.785 ======================================================== 00:20:11.785 Total : 42267.10 165.11 6057.66 1814.61 10674.26 00:20:11.785 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.785 rmmod nvme_tcp 00:20:11.785 rmmod nvme_fabrics 00:20:11.785 rmmod nvme_keyring 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:11.785 10:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3532558 ']' 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3532558 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3532558 ']' 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3532558 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.785 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3532558 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3532558' 00:20:12.044 killing process with pid 3532558 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3532558 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3532558 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:12.044 
10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.044 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.578 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.578 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:14.578 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:14.578 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:15.145 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:17.045 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.311 10:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.311 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.311 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.311 10:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.311 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.311 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:22.312 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:20:22.312 00:20:22.312 --- 10.0.0.2 ping statistics --- 00:20:22.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.312 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:22.312 00:20:22.312 --- 10.0.0.1 ping statistics --- 00:20:22.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.312 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.312 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:22.570 net.core.busy_poll = 1 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:22.570 net.core.busy_read = 1 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:22.570 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3536447 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3536447 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3536447 ']' 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.827 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.827 [2024-11-20 10:37:23.372901] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:20:22.827 [2024-11-20 10:37:23.372951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.827 [2024-11-20 10:37:23.451039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.827 [2024-11-20 10:37:23.493181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.827 [2024-11-20 10:37:23.493218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.827 [2024-11-20 10:37:23.493225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.827 [2024-11-20 10:37:23.493231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:22.827 [2024-11-20 10:37:23.493236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.827 [2024-11-20 10:37:23.494801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.827 [2024-11-20 10:37:23.494911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.827 [2024-11-20 10:37:23.495025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.828 [2024-11-20 10:37:23.495026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 [2024-11-20 10:37:24.374599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 Malloc1 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 [2024-11-20 10:37:24.439476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3536604 
00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:23.761 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:26.290 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:26.290 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.290 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.290 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.290 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:26.290 "tick_rate": 2300000000, 00:20:26.290 "poll_groups": [ 00:20:26.290 { 00:20:26.290 "name": "nvmf_tgt_poll_group_000", 00:20:26.290 "admin_qpairs": 1, 00:20:26.290 "io_qpairs": 3, 00:20:26.290 "current_admin_qpairs": 1, 00:20:26.290 "current_io_qpairs": 3, 00:20:26.290 "pending_bdev_io": 0, 00:20:26.290 "completed_nvme_io": 29210, 00:20:26.290 "transports": [ 00:20:26.290 { 00:20:26.290 "trtype": "TCP" 00:20:26.290 } 00:20:26.290 ] 00:20:26.290 }, 00:20:26.290 { 00:20:26.290 "name": "nvmf_tgt_poll_group_001", 00:20:26.290 "admin_qpairs": 0, 00:20:26.290 "io_qpairs": 1, 00:20:26.290 "current_admin_qpairs": 0, 00:20:26.290 "current_io_qpairs": 1, 00:20:26.290 "pending_bdev_io": 0, 00:20:26.290 "completed_nvme_io": 27032, 00:20:26.290 "transports": [ 00:20:26.290 { 00:20:26.290 "trtype": "TCP" 00:20:26.290 } 00:20:26.290 ] 00:20:26.290 }, 00:20:26.290 { 00:20:26.290 "name": "nvmf_tgt_poll_group_002", 00:20:26.290 "admin_qpairs": 0, 00:20:26.290 "io_qpairs": 0, 00:20:26.290 "current_admin_qpairs": 0, 
00:20:26.290 "current_io_qpairs": 0, 00:20:26.290 "pending_bdev_io": 0, 00:20:26.290 "completed_nvme_io": 0, 00:20:26.290 "transports": [ 00:20:26.290 { 00:20:26.290 "trtype": "TCP" 00:20:26.290 } 00:20:26.290 ] 00:20:26.290 }, 00:20:26.290 { 00:20:26.290 "name": "nvmf_tgt_poll_group_003", 00:20:26.290 "admin_qpairs": 0, 00:20:26.290 "io_qpairs": 0, 00:20:26.290 "current_admin_qpairs": 0, 00:20:26.290 "current_io_qpairs": 0, 00:20:26.290 "pending_bdev_io": 0, 00:20:26.290 "completed_nvme_io": 0, 00:20:26.290 "transports": [ 00:20:26.290 { 00:20:26.290 "trtype": "TCP" 00:20:26.290 } 00:20:26.290 ] 00:20:26.290 } 00:20:26.290 ] 00:20:26.290 }' 00:20:26.291 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:26.291 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:26.291 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:26.291 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:26.291 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3536604 00:20:34.399 Initializing NVMe Controllers 00:20:34.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:34.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:34.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:34.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:34.399 Initialization complete. Launching workers. 
00:20:34.399 ======================================================== 00:20:34.399 Latency(us) 00:20:34.399 Device Information : IOPS MiB/s Average min max 00:20:34.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4512.80 17.63 14185.19 1857.23 58845.53 00:20:34.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5015.30 19.59 12788.09 1898.08 58069.82 00:20:34.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14815.40 57.87 4319.24 1586.38 45593.89 00:20:34.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5652.00 22.08 11326.22 1419.67 58153.66 00:20:34.399 ======================================================== 00:20:34.399 Total : 29995.50 117.17 8539.88 1419.67 58845.53 00:20:34.399 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.399 rmmod nvme_tcp 00:20:34.399 rmmod nvme_fabrics 00:20:34.399 rmmod nvme_keyring 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:34.399 10:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3536447 ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3536447 ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3536447' 00:20:34.399 killing process with pid 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3536447 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:34.399 
10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.399 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.303 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.303 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:36.303 00:20:36.303 real 0m49.438s 00:20:36.303 user 2m46.532s 00:20:36.303 sys 0m10.524s 00:20:36.303 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.303 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.303 ************************************ 00:20:36.303 END TEST nvmf_perf_adq 00:20:36.303 ************************************ 00:20:36.303 10:37:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.303 10:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.303 10:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.303 10:37:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.562 ************************************ 00:20:36.562 START TEST nvmf_shutdown 00:20:36.562 ************************************ 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.562 * Looking for test storage... 00:20:36.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.562 10:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.562 --rc genhtml_branch_coverage=1 00:20:36.562 --rc genhtml_function_coverage=1 00:20:36.562 --rc genhtml_legend=1 00:20:36.562 --rc geninfo_all_blocks=1 00:20:36.562 --rc geninfo_unexecuted_blocks=1 00:20:36.562 00:20:36.562 ' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.562 --rc genhtml_branch_coverage=1 00:20:36.562 --rc genhtml_function_coverage=1 00:20:36.562 --rc genhtml_legend=1 00:20:36.562 --rc geninfo_all_blocks=1 00:20:36.562 --rc geninfo_unexecuted_blocks=1 00:20:36.562 00:20:36.562 ' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.562 --rc genhtml_branch_coverage=1 00:20:36.562 --rc genhtml_function_coverage=1 00:20:36.562 --rc genhtml_legend=1 00:20:36.562 --rc geninfo_all_blocks=1 00:20:36.562 --rc geninfo_unexecuted_blocks=1 00:20:36.562 00:20:36.562 ' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.562 --rc genhtml_branch_coverage=1 00:20:36.562 --rc genhtml_function_coverage=1 00:20:36.562 --rc genhtml_legend=1 00:20:36.562 --rc geninfo_all_blocks=1 00:20:36.562 --rc geninfo_unexecuted_blocks=1 00:20:36.562 00:20:36.562 ' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.562 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.563 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:36.822 ************************************ 00:20:36.822 START TEST nvmf_shutdown_tc1 00:20:36.822 ************************************ 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.822 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:43.387 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.387 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.387 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.387 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.387 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.387 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.387 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:20:43.388 00:20:43.388 --- 10.0.0.2 ping statistics --- 00:20:43.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.388 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:20:43.388 00:20:43.388 --- 10.0.0.1 ping statistics --- 00:20:43.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.388 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3541925 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3541925 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3541925 ']' 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:43.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 [2024-11-20 10:37:43.417008] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:20:43.388 [2024-11-20 10:37:43.417059] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.388 [2024-11-20 10:37:43.498604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.388 [2024-11-20 10:37:43.541178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.388 [2024-11-20 10:37:43.541215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.388 [2024-11-20 10:37:43.541223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.388 [2024-11-20 10:37:43.541229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.388 [2024-11-20 10:37:43.541234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.388 [2024-11-20 10:37:43.542878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.388 [2024-11-20 10:37:43.542987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.388 [2024-11-20 10:37:43.543092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.388 [2024-11-20 10:37:43.543093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 [2024-11-20 10:37:43.684359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.388 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.388 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.388 Malloc1 00:20:43.389 [2024-11-20 10:37:43.804937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.389 Malloc2 00:20:43.389 Malloc3 00:20:43.389 Malloc4 00:20:43.389 Malloc5 00:20:43.389 Malloc6 00:20:43.389 Malloc7 00:20:43.389 Malloc8 00:20:43.648 Malloc9 
00:20:43.648 Malloc10 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3542026 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3542026 /var/tmp/bdevperf.sock 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3542026 ']' 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.648 "name": "Nvme$subsystem", 00:20:43.648 "trtype": "$TEST_TRANSPORT", 00:20:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.648 "adrfam": "ipv4", 00:20:43.648 "trsvcid": "$NVMF_PORT", 00:20:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.648 "hdgst": ${hdgst:-false}, 00:20:43.648 "ddgst": ${ddgst:-false} 00:20:43.648 }, 00:20:43.648 "method": "bdev_nvme_attach_controller" 00:20:43.648 } 00:20:43.648 EOF 00:20:43.648 )") 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.648 "name": "Nvme$subsystem", 00:20:43.648 "trtype": "$TEST_TRANSPORT", 00:20:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.648 "adrfam": "ipv4", 00:20:43.648 "trsvcid": "$NVMF_PORT", 00:20:43.648 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.648 "hdgst": ${hdgst:-false}, 00:20:43.648 "ddgst": ${ddgst:-false} 00:20:43.648 }, 00:20:43.648 "method": "bdev_nvme_attach_controller" 00:20:43.648 } 00:20:43.648 EOF 00:20:43.648 )") 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.648 "name": "Nvme$subsystem", 00:20:43.648 "trtype": "$TEST_TRANSPORT", 00:20:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.648 "adrfam": "ipv4", 00:20:43.648 "trsvcid": "$NVMF_PORT", 00:20:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.648 "hdgst": ${hdgst:-false}, 00:20:43.648 "ddgst": ${ddgst:-false} 00:20:43.648 }, 00:20:43.648 "method": "bdev_nvme_attach_controller" 00:20:43.648 } 00:20:43.648 EOF 00:20:43.648 )") 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.648 "name": "Nvme$subsystem", 00:20:43.648 "trtype": "$TEST_TRANSPORT", 00:20:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.648 "adrfam": "ipv4", 00:20:43.648 "trsvcid": "$NVMF_PORT", 00:20:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.648 "hdgst": 
${hdgst:-false}, 00:20:43.648 "ddgst": ${ddgst:-false} 00:20:43.648 }, 00:20:43.648 "method": "bdev_nvme_attach_controller" 00:20:43.648 } 00:20:43.648 EOF 00:20:43.648 )") 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.648 "name": "Nvme$subsystem", 00:20:43.648 "trtype": "$TEST_TRANSPORT", 00:20:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.648 "adrfam": "ipv4", 00:20:43.648 "trsvcid": "$NVMF_PORT", 00:20:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.648 "hdgst": ${hdgst:-false}, 00:20:43.648 "ddgst": ${ddgst:-false} 00:20:43.648 }, 00:20:43.648 "method": "bdev_nvme_attach_controller" 00:20:43.648 } 00:20:43.648 EOF 00:20:43.648 )") 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.648 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.648 { 00:20:43.648 "params": { 00:20:43.649 "name": "Nvme$subsystem", 00:20:43.649 "trtype": "$TEST_TRANSPORT", 00:20:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "$NVMF_PORT", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.649 "hdgst": ${hdgst:-false}, 00:20:43.649 "ddgst": ${ddgst:-false} 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 
00:20:43.649 } 00:20:43.649 EOF 00:20:43.649 )") 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.649 { 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme$subsystem", 00:20:43.649 "trtype": "$TEST_TRANSPORT", 00:20:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "$NVMF_PORT", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.649 "hdgst": ${hdgst:-false}, 00:20:43.649 "ddgst": ${ddgst:-false} 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 } 00:20:43.649 EOF 00:20:43.649 )") 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.649 [2024-11-20 10:37:44.279963] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:20:43.649 [2024-11-20 10:37:44.280012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.649 { 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme$subsystem", 00:20:43.649 "trtype": "$TEST_TRANSPORT", 00:20:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "$NVMF_PORT", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.649 "hdgst": ${hdgst:-false}, 00:20:43.649 "ddgst": ${ddgst:-false} 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 } 00:20:43.649 EOF 00:20:43.649 )") 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.649 { 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme$subsystem", 00:20:43.649 "trtype": "$TEST_TRANSPORT", 00:20:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "$NVMF_PORT", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.649 "hdgst": ${hdgst:-false}, 00:20:43.649 "ddgst": ${ddgst:-false} 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 
00:20:43.649 } 00:20:43.649 EOF 00:20:43.649 )") 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.649 { 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme$subsystem", 00:20:43.649 "trtype": "$TEST_TRANSPORT", 00:20:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "$NVMF_PORT", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.649 "hdgst": ${hdgst:-false}, 00:20:43.649 "ddgst": ${ddgst:-false} 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 } 00:20:43.649 EOF 00:20:43.649 )") 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:43.649 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme1", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme2", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme3", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme4", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 
00:20:43.649 "name": "Nvme5", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme6", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme7", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme8", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:43.649 "hdgst": false, 00:20:43.649 "ddgst": false 00:20:43.649 }, 00:20:43.649 "method": "bdev_nvme_attach_controller" 00:20:43.649 },{ 00:20:43.649 "params": { 00:20:43.649 "name": "Nvme9", 00:20:43.649 "trtype": "tcp", 00:20:43.649 "traddr": "10.0.0.2", 00:20:43.649 "adrfam": "ipv4", 00:20:43.649 "trsvcid": "4420", 00:20:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:43.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:43.650 "hdgst": false, 00:20:43.650 "ddgst": false 00:20:43.650 }, 00:20:43.650 "method": "bdev_nvme_attach_controller" 00:20:43.650 },{ 00:20:43.650 "params": { 00:20:43.650 "name": "Nvme10", 00:20:43.650 "trtype": "tcp", 00:20:43.650 "traddr": "10.0.0.2", 00:20:43.650 "adrfam": "ipv4", 00:20:43.650 "trsvcid": "4420", 00:20:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:43.650 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:43.650 "hdgst": false, 00:20:43.650 "ddgst": false 00:20:43.650 }, 00:20:43.650 "method": "bdev_nvme_attach_controller" 00:20:43.650 }' 00:20:43.650 [2024-11-20 10:37:44.358679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.907 [2024-11-20 10:37:44.401432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3542026 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:45.807 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:46.741 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3542026 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:46.741 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3541925 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 
10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 [2024-11-20 10:37:47.216871] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:20:46.742 [2024-11-20 10:37:47.216919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542577 ] 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": 
"bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.742 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.742 { 00:20:46.742 "params": { 00:20:46.742 "name": "Nvme$subsystem", 00:20:46.742 "trtype": "$TEST_TRANSPORT", 00:20:46.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.742 "adrfam": "ipv4", 00:20:46.742 "trsvcid": "$NVMF_PORT", 00:20:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.742 "hdgst": ${hdgst:-false}, 00:20:46.742 "ddgst": ${ddgst:-false} 00:20:46.742 }, 00:20:46.742 "method": "bdev_nvme_attach_controller" 00:20:46.742 } 00:20:46.742 EOF 00:20:46.742 )") 00:20:46.743 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.743 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
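The loop traced above builds one JSON fragment per subsystem in a bash array via a heredoc, then joins the fragments before handing them to `jq`. A minimal self-contained sketch of that pattern (placeholder values, not the real test configuration; `jq` is omitted so the sketch has no external dependency):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from nvmf/common.sh seen in the
# trace: collect one JSON object per subsystem in an array, then join
# the array elements with IFS=, for the final controller list.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT mirror the log's
# variable names; the values here are stand-ins.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # ${hdgst:-false} / ${ddgst:-false} default the digest flags off,
  # exactly as in the traced heredoc.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the log's IFS=, / printf '%s\n'
# step does before piping the result through jq.
IFS=,
printf '%s\n' "${config[*]}"
```

In the real harness the joined string is wrapped and pretty-printed by `jq .`, producing the `Nvme1`..`Nvme10` controller list visible later in the trace.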
00:20:46.743 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:46.743 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme1", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme2", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme3", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme4", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 
00:20:46.743 "name": "Nvme5", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme6", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme7", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme8", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme9", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 },{ 00:20:46.743 "params": { 00:20:46.743 "name": "Nvme10", 00:20:46.743 "trtype": "tcp", 00:20:46.743 "traddr": "10.0.0.2", 00:20:46.743 "adrfam": "ipv4", 00:20:46.743 "trsvcid": "4420", 00:20:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.743 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.743 "hdgst": false, 00:20:46.743 "ddgst": false 00:20:46.743 }, 00:20:46.743 "method": "bdev_nvme_attach_controller" 00:20:46.743 }' 00:20:46.743 [2024-11-20 10:37:47.294470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.743 [2024-11-20 10:37:47.337147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.117 Running I/O for 1 seconds... 00:20:49.489 2190.00 IOPS, 136.88 MiB/s 00:20:49.489 Latency(us) 00:20:49.489 [2024-11-20T09:37:50.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme1n1 : 1.06 245.09 15.32 0.00 0.00 257666.40 3818.18 232510.33 00:20:49.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme2n1 : 1.14 285.29 17.83 0.00 0.00 217904.93 7123.48 208803.39 00:20:49.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme3n1 : 1.13 286.73 17.92 0.00 0.00 207853.60 15158.76 215186.03 00:20:49.490 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme4n1 : 1.15 284.50 17.78 0.00 0.00 212797.26 3761.20 235245.75 00:20:49.490 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme5n1 : 1.09 235.66 14.73 0.00 0.00 253013.70 19375.86 226127.69 00:20:49.490 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme6n1 : 1.16 275.90 17.24 0.00 0.00 213884.39 18805.98 218833.25 00:20:49.490 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme7n1 : 1.15 278.36 17.40 0.00 0.00 208660.03 15842.62 222480.47 00:20:49.490 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme8n1 : 1.16 277.04 17.32 0.00 0.00 206567.33 13392.14 229774.91 00:20:49.490 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme9n1 : 1.16 275.04 17.19 0.00 0.00 205095.00 11397.57 246187.41 00:20:49.490 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.490 Verification LBA range: start 0x0 length 0x400 00:20:49.490 Nvme10n1 : 1.17 274.63 17.16 0.00 0.00 202240.71 17096.35 219745.06 00:20:49.490 [2024-11-20T09:37:50.221Z] =================================================================================================================== 00:20:49.490 [2024-11-20T09:37:50.221Z] Total : 2718.25 169.89 0.00 0.00 217070.32 3761.20 246187.41 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
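The bdevperf table above reports both IOPS and MiB/s per device. With the stated 64 KiB (65536-byte) I/O size these two columns are related by MiB/s = IOPS × 65536 / 1048576 = IOPS / 16, which is a quick sanity check on any row; for example Nvme1n1's 245.09 IOPS:

```shell
# Cross-check one row of the table: 245.09 IOPS at 64 KiB per I/O
# should correspond to the reported 15.32 MiB/s (IOPS / 16).
awk 'BEGIN { printf "%.2f\n", 245.09 / 16 }'
# prints 15.32
```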
00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.490 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.490 rmmod nvme_tcp 00:20:49.490 rmmod nvme_fabrics 00:20:49.490 rmmod nvme_keyring 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3541925 ']' 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3541925 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3541925 ']' 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3541925 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3541925 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3541925' 00:20:49.748 killing process with pid 3541925 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3541925 00:20:49.748 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3541925 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.007 10:37:50 
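The `killprocess` sequence traced above follows a standard teardown shape: probe the pid with `kill -0`, send the signal, then `wait` so the exit status is reaped. A minimal sketch of that shape, demonstrated on a throwaway `sleep` rather than the real nvmf target pid:

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 / kill / wait teardown pattern from
# autotest_common.sh, shown on a disposable background process.
sleep 30 &
pid=$!

# kill -0 sends no signal; it only checks that the pid exists and is
# signalable, mirroring the '[' -z ... ']' / kill -0 guard in the log.
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"
fi

# wait reaps the child so the pid cannot linger as a zombie; it returns
# the child's (signal-derived) status, so tolerate a nonzero result.
wait "$pid" 2>/dev/null || true

# After wait, kill -0 should fail: the process is fully gone.
if kill -0 "$pid" 2>/dev/null; then alive=1; else alive=0; fi
echo "alive=$alive"
```

The `wait` step matters: without it the harness's later `ps`/pid checks could still see a zombie entry for the killed target.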
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.007 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.540 00:20:52.540 real 0m15.410s 00:20:52.540 user 0m34.358s 00:20:52.540 sys 0m5.862s 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.540 ************************************ 00:20:52.540 END TEST nvmf_shutdown_tc1 00:20:52.540 ************************************ 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.540 ************************************ 00:20:52.540 
START TEST nvmf_shutdown_tc2 00:20:52.540 ************************************ 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.540 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.540 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.540 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.541 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.541 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.541 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.541 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.541 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.541 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.541 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.541 10:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.541 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:52.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:20:52.541 00:20:52.541 --- 10.0.0.2 ping statistics --- 00:20:52.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.541 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:52.541 00:20:52.541 --- 10.0.0.1 ping statistics --- 00:20:52.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.541 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.541 10:37:53 
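The `nvmf_tcp_init` sequence above follows a fixed recipe: flush both interfaces, create a namespace, move the target interface into it, assign the two /24 addresses, bring the links up, and verify reachability in both directions with `ping`. A dry-run sketch of that sequence, with interface names and addresses taken from the log (the `run` wrapper, which echoes instead of executing so no root is needed, is my own addition):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator namespace setup seen in the log.
# run() echoes each command instead of executing it, so the sequence can
# be inspected without root privileges or real interfaces.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Cross-namespace reachability checks, as in the log output above:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Note that once an interface is moved with `ip link set ... netns`, all further configuration of it (addresses, link state, the target-side ping) must go through `ip netns exec`, which is why the log prefixes those commands.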
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3543709 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3543709 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3543709 ']' 00:20:52.541 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.542 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.542 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:52.542 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.542 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.542 [2024-11-20 10:37:53.161159] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:20:52.542 [2024-11-20 10:37:53.161216] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.542 [2024-11-20 10:37:53.241256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.801 [2024-11-20 10:37:53.285437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.801 [2024-11-20 10:37:53.285473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.801 [2024-11-20 10:37:53.285480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.801 [2024-11-20 10:37:53.285486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.801 [2024-11-20 10:37:53.285491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:52.801 [2024-11-20 10:37:53.286989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.801 [2024-11-20 10:37:53.287095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.801 [2024-11-20 10:37:53.287202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.801 [2024-11-20 10:37:53.287203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.801 [2024-11-20 10:37:53.424343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.801 10:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.801 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.801 Malloc1 00:20:53.060 [2024-11-20 10:37:53.544498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.060 Malloc2 00:20:53.060 Malloc3 00:20:53.060 Malloc4 00:20:53.060 Malloc5 00:20:53.060 Malloc6 00:20:53.060 Malloc7 00:20:53.318 Malloc8 00:20:53.318 Malloc9 
00:20:53.318 Malloc10 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3543770 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3543770 /var/tmp/bdevperf.sock 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3543770 ']' 00:20:53.318 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 
"adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": 
${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 [2024-11-20 10:37:54.017746] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:20:53.319 [2024-11-20 10:37:54.017793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543770 ] 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": 
"bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.319 { 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme$subsystem", 00:20:53.319 "trtype": "$TEST_TRANSPORT", 00:20:53.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "$NVMF_PORT", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.319 "hdgst": ${hdgst:-false}, 00:20:53.319 "ddgst": ${ddgst:-false} 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 } 00:20:53.319 EOF 00:20:53.319 )") 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
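The repeated `config+=("$(cat <<-EOF ... EOF)")` lines above are `gen_nvmf_target_json` building one JSON fragment per subsystem: each loop iteration expands a heredoc (with `$subsystem` substituted) into a bash array element, and the elements are later joined with commas (`IFS=,`) before being pretty-printed by `jq .`. A minimal sketch of that pattern, with a simplified field set of my own choosing:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation pattern from the log:
# capture a heredoc per loop iteration into an array, then join the
# array elements with commas via IFS.
gen_config() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # "${config[*]}" joins with first char of IFS
}
```

Piping the joined output through `jq .` (as the real helper does) both validates the assembled JSON and pretty-prints it, which is what produces the expanded `printf '%s\n' '{ ... },{ ... }'` dump that follows in the log.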
00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:53.319 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme1", 00:20:53.319 "trtype": "tcp", 00:20:53.319 "traddr": "10.0.0.2", 00:20:53.319 "adrfam": "ipv4", 00:20:53.319 "trsvcid": "4420", 00:20:53.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.319 "hdgst": false, 00:20:53.319 "ddgst": false 00:20:53.319 }, 00:20:53.319 "method": "bdev_nvme_attach_controller" 00:20:53.319 },{ 00:20:53.319 "params": { 00:20:53.319 "name": "Nvme2", 00:20:53.319 "trtype": "tcp", 00:20:53.319 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme3", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme4", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 
00:20:53.320 "name": "Nvme5", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme6", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme7", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme8", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme9", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 },{ 00:20:53.320 "params": { 00:20:53.320 "name": "Nvme10", 00:20:53.320 "trtype": "tcp", 00:20:53.320 "traddr": "10.0.0.2", 00:20:53.320 "adrfam": "ipv4", 00:20:53.320 "trsvcid": "4420", 00:20:53.320 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.320 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.320 "hdgst": false, 00:20:53.320 "ddgst": false 00:20:53.320 }, 00:20:53.320 "method": "bdev_nvme_attach_controller" 00:20:53.320 }' 00:20:53.578 [2024-11-20 10:37:54.096680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.578 [2024-11-20 10:37:54.139169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.953 Running I/O for 10 seconds... 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.211 10:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.211 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.469 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.469 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:55.469 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:55.469 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.727 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:55.727 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3543770 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3543770 ']' 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3543770 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543770 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543770' 00:20:55.985 killing process with pid 3543770 00:20:55.985 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3543770 00:20:55.985 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3543770 00:20:55.985 Received shutdown signal, test time was about 0.990856 seconds 00:20:55.985 00:20:55.985 Latency(us) 00:20:55.985 [2024-11-20T09:37:56.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme1n1 : 0.98 264.54 16.53 0.00 0.00 238308.07 4701.50 220656.86 00:20:55.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme2n1 : 0.97 282.96 17.69 0.00 0.00 216977.83 7921.31 209715.20 00:20:55.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme3n1 : 0.96 271.56 16.97 0.00 0.00 223844.45 7522.39 213362.42 00:20:55.986 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme4n1 : 0.98 330.03 20.63 0.00 0.00 181827.33 5641.79 185096.46 00:20:55.986 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme5n1 : 0.97 265.05 16.57 0.00 0.00 222975.11 15842.62 220656.86 00:20:55.986 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme6n1 : 0.98 260.74 16.30 0.00 0.00 222691.95 18350.08 223392.28 00:20:55.986 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme7n1 : 0.96 
268.01 16.75 0.00 0.00 212300.80 21427.42 217921.45 00:20:55.986 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme8n1 : 0.97 267.08 16.69 0.00 0.00 208967.02 4217.10 221568.67 00:20:55.986 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme9n1 : 0.99 259.13 16.20 0.00 0.00 212530.75 18805.98 233422.14 00:20:55.986 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.986 Verification LBA range: start 0x0 length 0x400 00:20:55.986 Nvme10n1 : 0.99 258.54 16.16 0.00 0.00 209080.54 18122.13 242540.19 00:20:55.986 [2024-11-20T09:37:56.717Z] =================================================================================================================== 00:20:55.986 [2024-11-20T09:37:56.717Z] Total : 2727.64 170.48 0.00 0.00 214145.16 4217.10 242540.19 00:20:56.244 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3543709 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.179 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.179 rmmod nvme_tcp 00:20:57.179 rmmod nvme_fabrics 00:20:57.437 rmmod nvme_keyring 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3543709 ']' 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3543709 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3543709 ']' 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3543709 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543709 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543709' 00:20:57.438 killing process with pid 3543709 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3543709 00:20:57.438 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3543709 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.696 10:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.696 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.232 00:21:00.232 real 0m7.626s 00:21:00.232 user 0m22.940s 00:21:00.232 sys 0m1.395s 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.232 ************************************ 00:21:00.232 END TEST nvmf_shutdown_tc2 00:21:00.232 ************************************ 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.232 ************************************ 00:21:00.232 START TEST nvmf_shutdown_tc3 00:21:00.232 ************************************ 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:00.232 10:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.232 10:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.232 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.233 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.233 10:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.233 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:21:00.233 00:21:00.233 --- 10.0.0.2 ping statistics --- 00:21:00.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.233 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:21:00.233 00:21:00.233 --- 10.0.0.1 ping statistics --- 00:21:00.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.233 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.233 
10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3545039 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3545039 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3545039 ']' 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.233 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.233 [2024-11-20 10:38:00.858022] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:00.233 [2024-11-20 10:38:00.858074] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.233 [2024-11-20 10:38:00.935587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.491 [2024-11-20 10:38:00.979856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.491 [2024-11-20 10:38:00.979889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.491 [2024-11-20 10:38:00.979897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.491 [2024-11-20 10:38:00.979903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.491 [2024-11-20 10:38:00.979908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:00.491 [2024-11-20 10:38:00.981410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.491 [2024-11-20 10:38:00.981516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.491 [2024-11-20 10:38:00.981622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.491 [2024-11-20 10:38:00.981623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.491 [2024-11-20 10:38:01.118799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.491 10:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.491 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.492 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.492 Malloc1 00:21:00.749 [2024-11-20 10:38:01.235654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.749 Malloc2 00:21:00.749 Malloc3 00:21:00.749 Malloc4 00:21:00.749 Malloc5 00:21:00.749 Malloc6 00:21:00.749 Malloc7 00:21:01.006 Malloc8 00:21:01.006 Malloc9 
00:21:01.006 Malloc10 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3545313 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3545313 /var/tmp/bdevperf.sock 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3545313 ']' 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:01.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.006 { 00:21:01.006 "params": { 00:21:01.006 "name": "Nvme$subsystem", 00:21:01.006 "trtype": "$TEST_TRANSPORT", 00:21:01.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.006 "adrfam": "ipv4", 00:21:01.006 "trsvcid": "$NVMF_PORT", 00:21:01.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.006 "hdgst": ${hdgst:-false}, 00:21:01.006 "ddgst": ${ddgst:-false} 00:21:01.006 }, 00:21:01.006 "method": "bdev_nvme_attach_controller" 00:21:01.006 } 00:21:01.006 EOF 00:21:01.006 )") 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.006 { 00:21:01.006 "params": { 00:21:01.006 "name": "Nvme$subsystem", 00:21:01.006 "trtype": "$TEST_TRANSPORT", 00:21:01.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.006 
"adrfam": "ipv4", 00:21:01.006 "trsvcid": "$NVMF_PORT", 00:21:01.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.006 "hdgst": ${hdgst:-false}, 00:21:01.006 "ddgst": ${ddgst:-false} 00:21:01.006 }, 00:21:01.006 "method": "bdev_nvme_attach_controller" 00:21:01.006 } 00:21:01.006 EOF 00:21:01.006 )") 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.006 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.006 { 00:21:01.006 "params": { 00:21:01.006 "name": "Nvme$subsystem", 00:21:01.006 "trtype": "$TEST_TRANSPORT", 00:21:01.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.006 "adrfam": "ipv4", 00:21:01.006 "trsvcid": "$NVMF_PORT", 00:21:01.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.006 "hdgst": ${hdgst:-false}, 00:21:01.006 "ddgst": ${ddgst:-false} 00:21:01.006 }, 00:21:01.006 "method": "bdev_nvme_attach_controller" 00:21:01.006 } 00:21:01.006 EOF 00:21:01.006 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": 
${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 [2024-11-20 10:38:01.714212] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:01.007 [2024-11-20 10:38:01.714260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545313 ] 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": 
${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.007 { 00:21:01.007 "params": { 00:21:01.007 "name": "Nvme$subsystem", 00:21:01.007 "trtype": "$TEST_TRANSPORT", 00:21:01.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.007 "adrfam": "ipv4", 00:21:01.007 "trsvcid": "$NVMF_PORT", 00:21:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.007 "hdgst": ${hdgst:-false}, 00:21:01.007 "ddgst": ${ddgst:-false} 00:21:01.007 }, 00:21:01.007 "method": "bdev_nvme_attach_controller" 00:21:01.007 } 00:21:01.007 EOF 00:21:01.007 )") 00:21:01.007 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.264 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:01.264 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:01.264 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme1", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme2", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme3", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme4", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 
00:21:01.264 "name": "Nvme5", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme6", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme7", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme8", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme9", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 },{ 00:21:01.264 "params": { 00:21:01.264 "name": "Nvme10", 00:21:01.264 "trtype": "tcp", 00:21:01.264 "traddr": "10.0.0.2", 00:21:01.264 "adrfam": "ipv4", 00:21:01.264 "trsvcid": "4420", 00:21:01.264 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.264 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.264 "hdgst": false, 00:21:01.264 "ddgst": false 00:21:01.264 }, 00:21:01.264 "method": "bdev_nvme_attach_controller" 00:21:01.264 }' 00:21:01.264 [2024-11-20 10:38:01.792654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.264 [2024-11-20 10:38:01.834492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.212 Running I/O for 10 seconds... 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:03.212 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=85 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 85 -ge 100 ']' 00:21:03.482 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3545039 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3545039 ']' 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3545039 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.740 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3545039 00:21:04.012 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.012 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.012 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3545039'
00:21:04.012 killing process with pid 3545039
00:21:04.012 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3545039
00:21:04.012 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3545039
00:21:04.012 [2024-11-20 10:38:04.478791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5700 is same with the state(6) to be set
00:21:04.013 [2024-11-20 10:38:04.480311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8180 is same with the state(6) to be set
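The trace above shows the shutdown test's control flow: a polling loop in target/shutdown.sh breaks once `read_io_count` reaches 100, then `killprocess` in common/autotest_common.sh validates the pid (non-empty, alive, and on Linux not a bare `sudo` wrapper) before killing and reaping it. A minimal runnable sketch of that flow, where `read_io_count_stub`, `killprocess_sketch`, and the `sleep` stand-in target are hypothetical illustrations, not SPDK's actual helpers:

```shell
#!/usr/bin/env bash
# Sketch of the wait-for-IO-then-kill pattern traced above.
# read_io_count_stub and killprocess_sketch are hypothetical stand-ins.

read_io_count_stub() {
    # The real test queries bdev IO statistics over RPC; a fixed count
    # keeps this sketch self-contained and runnable.
    echo 195
}

killprocess_sketch() {
    local pid=$1
    # Guard clauses mirroring the trace: non-empty pid, process alive,
    # and on Linux never a bare "sudo" wrapper.
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    if [ "$(uname)" = Linux ]; then
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null   # reap; ignore the SIGTERM exit status
    return 0
}

# Poll until at least 100 IOs have completed, then stop a stand-in target.
ret=1
for _ in 1 2 3 4 5; do
    count=$(read_io_count_stub)
    if [ "$count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 1
done

sleep 30 &
target_pid=$!
[ "$ret" -eq 0 ] && killprocess_sketch "$target_pid"
echo "ret=$ret count=$count"
```

Because `wait` is called after `kill`, the child is reaped inside the helper and no zombie is left behind; in a non-interactive shell no job-control "Terminated" message is printed.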
00:21:04.013 [2024-11-20 10:38:04.482223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482331] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.013 [2024-11-20 10:38:04.482364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 
is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 
00:21:04.014 [2024-11-20 10:38:04.482580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482657] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.482663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f60c0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 
is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 
00:21:04.014 [2024-11-20 10:38:04.483953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.014 [2024-11-20 10:38:04.483960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.483998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484030] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.484129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65b0 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 
is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 
00:21:04.015 [2024-11-20 10:38:04.485538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485624] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.015 [2024-11-20 10:38:04.485773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 
is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.485813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e00 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 
00:21:04.016 [2024-11-20 10:38:04.486867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486954] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.486993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 
is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 
00:21:04.016 [2024-11-20 10:38:04.487192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.487229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72d0 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488581] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.016 [2024-11-20 10:38:04.488625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 
is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 
00:21:04.017 [2024-11-20 10:38:04.488819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488898] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.488952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c90 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.490496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3640 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.490609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.017 [2024-11-20 10:38:04.490660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.017 [2024-11-20 10:38:04.490667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e370 is same with the state(6) to be set 00:21:04.017 [2024-11-20 10:38:04.490693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490758] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c02610 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.490783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b9b0 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.490865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119a90 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.490953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.490986] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.490993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e8c0 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.491044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ba60 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.491125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedd50 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.491211] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cee1b0 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.491292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.018 [2024-11-20 10:38:04.491344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.491351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146460 is same with the state(6) to be set 00:21:04.018 [2024-11-20 10:38:04.492012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.018 [2024-11-20 10:38:04.492035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.492049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.018 [2024-11-20 10:38:04.492057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.018 [2024-11-20 10:38:04.492067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.018 [2024-11-20 10:38:04.492074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 
[2024-11-20 10:38:04.492346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.019 [2024-11-20 10:38:04.492552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.019 [2024-11-20 10:38:04.492560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 
10:38:04.492692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.020 [2024-11-20 10:38:04.492945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.020 [2024-11-20 10:38:04.492956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.020 [2024-11-20 10:38:04.492965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.492972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.492980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.492987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.492995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.020 [2024-11-20 10:38:04.493590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.020 [2024-11-20 10:38:04.493722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.020 [2024-11-20 10:38:04.493729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.493988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.493996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.494004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.494011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.494019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.494028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.494036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.494043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.494051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.494058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.494066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.021 [2024-11-20 10:38:04.504395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.021 [2024-11-20 10:38:04.504405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.504692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.504732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.022 [2024-11-20 10:38:04.505086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3640 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e370 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c02610 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b9b0 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119a90 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e8c0 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214ba60 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedd50 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cee1b0 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2146460 (9): Bad file descriptor
00:21:04.022 [2024-11-20 10:38:04.505339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.022 [2024-11-20 10:38:04.505600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.022 [2024-11-20 10:38:04.505608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.505984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.505991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.506000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.506007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.506017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.506024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.506033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.506040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.023 [2024-11-20 10:38:04.506048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.023 [2024-11-20 10:38:04.506055] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 10:38:04.506226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.023 [2024-11-20 10:38:04.506233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.023 [2024-11-20 
10:38:04.506243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.506391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.506400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.024 [2024-11-20 10:38:04.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-11-20 10:38:04.509777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.024 [2024-11-20 10:38:04.509788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 
[2024-11-20 10:38:04.509902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.509986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.509996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 
[2024-11-20 10:38:04.510389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-11-20 10:38:04.510588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 10:38:04.510599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.510609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.513543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:04.026 [2024-11-20 10:38:04.513588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:04.026 [2024-11-20 10:38:04.514567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:04.026 [2024-11-20 10:38:04.514599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:04.026 [2024-11-20 10:38:04.514858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 10:38:04.514879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210e8c0 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 10:38:04.514891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e8c0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 10:38:04.515110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 10:38:04.515132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cedd50 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 10:38:04.515142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedd50 is same with the state(6) to be set 00:21:04.026 [2024-11-20 10:38:04.515198] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 10:38:04.515550] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 10:38:04.515604] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 10:38:04.515666] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 10:38:04.515719] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 
10:38:04.515780] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.026 [2024-11-20 10:38:04.516203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 10:38:04.516223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2146460 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 10:38:04.516234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146460 is same with the state(6) to be set 00:21:04.026 [2024-11-20 10:38:04.516317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 10:38:04.516331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213b9b0 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 10:38:04.516342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b9b0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 10:38:04.516355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e8c0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 10:38:04.516369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedd50 (9): Bad file descriptor 00:21:04.026 [2024-11-20 10:38:04.516565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2146460 (9): Bad file descriptor 00:21:04.026 [2024-11-20 10:38:04.516580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b9b0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 10:38:04.516589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 10:38:04.516597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 
10:38:04.516606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:04.026 [2024-11-20 10:38:04.516615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 10:38:04.516624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 10:38:04.516631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 10:38:04.516639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:04.026 [2024-11-20 10:38:04.516646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 10:38:04.516693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.516992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.516999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.026 [2024-11-20 10:38:04.517089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.026 [2024-11-20 10:38:04.517096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 
10:38:04.517314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.027 [2024-11-20 10:38:04.517586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.027 [2024-11-20 10:38:04.517704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.027 [2024-11-20 10:38:04.517713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.517721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.517729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.517735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.517743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2450 is same with the state(6) to be set 00:21:04.028 [2024-11-20 10:38:04.518760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.518987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.518994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 10:38:04.519247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.028 [2024-11-20 10:38:04.519254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.028 [2024-11-20 
10:38:04.519263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.029 [2024-11-20 10:38:04.519543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519632] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.519823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.519832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c3d60 is same with the state(6) to be set 00:21:04.029 [2024-11-20 10:38:04.520847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.520860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.520872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.520879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.520889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.029 [2024-11-20 10:38:04.520896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.029 [2024-11-20 10:38:04.520905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.520913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.520922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.520929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.520938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.520945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.520961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.520969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.520977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.520985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.520994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521294] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 10:38:04.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.030 [2024-11-20 10:38:04.521470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.030 [2024-11-20 
10:38:04.521479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.030 [2024-11-20 10:38:04.521486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:39-63, lba:29568-32640 (step 128) ...]
00:21:04.031 [2024-11-20 10:38:04.521893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f10f0 is same with the state(6) to be set
00:21:04.031 [2024-11-20 10:38:04.522898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.031 [2024-11-20 10:38:04.522910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:5-56, lba:25216-31744 (step 128) ...]
00:21:04.032 [2024-11-20 10:38:04.523779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.032 [2024-11-20 10:38:04.523787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED pairs repeat for cid:1-3, lba:32896-33152 (step 128), then READ / ABORTED pairs for cid:57-63, lba:31872-32640 (step 128) ...]
00:21:04.033 [2024-11-20 10:38:04.523963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f3a90 is same with the state(6) to be set
00:21:04.033 [2024-11-20 10:38:04.524962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.033 [2024-11-20 10:38:04.524976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-27, lba:24704-28032 (step 128); the final completion line is cut off at the chunk boundary ...]
00:21:04.034 [2024-11-20 10:38:04.525416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.034 [2024-11-20 10:38:04.525423] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 
10:38:04.525614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.034 [2024-11-20 10:38:04.525891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.525986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.525996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.034 [2024-11-20 10:38:04.526003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.034 [2024-11-20 10:38:04.526012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.526019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.526027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f4fc0 is same with the state(6) to be set 00:21:04.035 [2024-11-20 10:38:04.527025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:04.035 [2024-11-20 10:38:04.527181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527648] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.035 [2024-11-20 10:38:04.527657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.035 [2024-11-20 10:38:04.527664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 
10:38:04.527834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.527985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.527992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.036 [2024-11-20 10:38:04.528088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.036 [2024-11-20 10:38:04.528097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32c3290 is same with the state(6) to be set 00:21:04.036 [2024-11-20 10:38:04.529081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:04.036 [2024-11-20 
10:38:04.529100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:04.036 [2024-11-20 10:38:04.529110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:04.036 [2024-11-20 10:38:04.529119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:04.036 [2024-11-20 10:38:04.529155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:04.036 [2024-11-20 10:38:04.529164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:04.036 [2024-11-20 10:38:04.529173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:04.036 [2024-11-20 10:38:04.529181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:04.036 [2024-11-20 10:38:04.529190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:04.036 [2024-11-20 10:38:04.529197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:04.036 [2024-11-20 10:38:04.529204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:04.036 [2024-11-20 10:38:04.529213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:04.036 [2024-11-20 10:38:04.529262] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:21:04.036 [2024-11-20 10:38:04.529275] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:04.036 [2024-11-20 10:38:04.529340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:04.036 task offset: 27648 on job bdev=Nvme5n1 fails
00:21:04.036
00:21:04.036                                                    Latency(us)
00:21:04.036 [2024-11-20T09:38:04.767Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:21:04.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.036 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:04.036 Verification LBA range: start 0x0 length 0x400
00:21:04.036 	 Nvme1n1  :       0.91  209.99   13.12   70.00  0.00  226049.11   16982.37  217921.45
00:21:04.036 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.036 Job: Nvme2n1 ended in about 0.91 seconds with error
00:21:04.036 Verification LBA range: start 0x0 length 0x400
00:21:04.036 	 Nvme2n1  :       0.91  211.57   13.22   70.52  0.00  220341.43   18692.01  221568.67
00:21:04.036 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.036 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:04.036 Verification LBA range: start 0x0 length 0x400
00:21:04.036 	 Nvme3n1  :       0.92  213.87   13.37   69.84  0.00  215174.85   19603.81  209715.20
00:21:04.036 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.036 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:04.036 Verification LBA range: start 0x0 length 0x400
00:21:04.036 	 Nvme4n1  :       0.92  209.04   13.06   69.68  0.00  215139.62   13962.02  221568.67
00:21:04.037 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme5n1  :       0.90  212.56   13.28   70.85  0.00  207334.85   14930.81  224304.08
00:21:04.037 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme6n1 ended in about 0.92 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme6n1  :       0.92  212.92   13.31   69.52  0.00  204485.47   15728.64  220656.86
00:21:04.037 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme7n1 ended in about 0.92 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme7n1  :       0.92  208.11   13.01   69.37  0.00  204204.08   15158.76  216097.84
00:21:04.037 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme8n1 ended in about 0.90 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme8n1  :       0.90  212.25   13.27   70.75  0.00  195695.30   14588.88  221568.67
00:21:04.037 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme9n1 ended in about 0.92 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme9n1  :       0.92  138.43    8.65   69.21  0.00  262507.00   18805.98  242540.19
00:21:04.037 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.037 Job: Nvme10n1 ended in about 0.91 seconds with error
00:21:04.037 Verification LBA range: start 0x0 length 0x400
00:21:04.037 	 Nvme10n1 :       0.91  211.23   13.20   70.41  0.00  189014.82    5926.73  219745.06
00:21:04.037 [2024-11-20T09:38:04.768Z] ===================================================================================================================
00:21:04.037 [2024-11-20T09:38:04.768Z] Total    :              2039.95  127.50  700.15  0.00  212741.42    5926.73  242540.19
00:21:04.037 [2024-11-20 10:38:04.561526] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:04.037 [2024-11-20 10:38:04.561585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:04.037 [2024-11-20 10:38:04.561902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.561921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cee1b0 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.561932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cee1b0 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.562353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.562366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3640 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.562374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3640 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.562565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.562577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119a90 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.562585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119a90 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.562803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.562815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210e370 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.562823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e370 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.564209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:04.037 [2024-11-20 10:38:04.564228] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:04.037 [2024-11-20 10:38:04.564238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:04.037 [2024-11-20 10:38:04.564247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:04.037 [2024-11-20 10:38:04.564531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.564545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c02610 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.564553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c02610 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.564751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.564763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214ba60 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.564771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ba60 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.564784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cee1b0 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.564797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3640 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.564806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119a90 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.564815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e370 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.564847] 
bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:04.037 [2024-11-20 10:38:04.564862] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:04.037 [2024-11-20 10:38:04.564871] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:04.037 [2024-11-20 10:38:04.564885] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:04.037 [2024-11-20 10:38:04.565145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.565160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cedd50 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.565168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedd50 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.565320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.565332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210e8c0 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.565339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e8c0 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.565552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.565565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213b9b0 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.565572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x213b9b0 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.565792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.037 [2024-11-20 10:38:04.565803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2146460 with addr=10.0.0.2, port=4420 00:21:04.037 [2024-11-20 10:38:04.565811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146460 is same with the state(6) to be set 00:21:04.037 [2024-11-20 10:38:04.565821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c02610 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.565830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214ba60 (9): Bad file descriptor 00:21:04.037 [2024-11-20 10:38:04.565839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:04.037 [2024-11-20 10:38:04.565847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:04.037 [2024-11-20 10:38:04.565856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:04.037 [2024-11-20 10:38:04.565865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:04.037 [2024-11-20 10:38:04.565873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:04.037 [2024-11-20 10:38:04.565879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:04.037 [2024-11-20 10:38:04.565886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:21:04.037 [2024-11-20 10:38:04.565892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:04.037 [2024-11-20 10:38:04.565900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:04.037 [2024-11-20 10:38:04.565907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:04.037 [2024-11-20 10:38:04.565914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:04.037 [2024-11-20 10:38:04.565920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:04.037 [2024-11-20 10:38:04.565927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:04.037 [2024-11-20 10:38:04.565936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.565943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.565998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:04.038 [2024-11-20 10:38:04.566072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedd50 (9): Bad file descriptor 00:21:04.038 [2024-11-20 10:38:04.566085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e8c0 (9): Bad file descriptor 00:21:04.038 [2024-11-20 10:38:04.566094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b9b0 (9): Bad file descriptor 00:21:04.038 [2024-11-20 10:38:04.566102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2146460 (9): Bad file descriptor 00:21:04.038 [2024-11-20 10:38:04.566110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:04.038 [2024-11-20 10:38:04.566138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:04.038 [2024-11-20 10:38:04.566182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:04.038 [2024-11-20 10:38:04.566212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:04.038 [2024-11-20 10:38:04.566242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:04.038 [2024-11-20 10:38:04.566270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:04.038 [2024-11-20 10:38:04.566276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:04.038 [2024-11-20 10:38:04.566286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:04.038 [2024-11-20 10:38:04.566293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:04.296 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3545313 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3545313 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3545313 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:05.232 10:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:05.232 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.232 rmmod nvme_tcp 00:21:05.232 rmmod nvme_fabrics 00:21:05.232 rmmod nvme_keyring 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3545039 ']' 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3545039 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3545039 ']' 00:21:05.490 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3545039 00:21:05.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3545039) - No such process 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3545039 is not found' 00:21:05.491 Process with pid 3545039 is not found 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:05.491 
10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.491 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.394 00:21:07.394 real 0m7.551s 00:21:07.394 user 0m18.416s 00:21:07.394 sys 0m1.337s 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.394 ************************************ 00:21:07.394 END TEST nvmf_shutdown_tc3 00:21:07.394 ************************************ 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:07.394 10:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.394 ************************************ 00:21:07.394 START TEST nvmf_shutdown_tc4 00:21:07.394 ************************************ 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:07.394 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.653 10:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.653 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.654 
10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.654 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.654 Found net devices under 0000:86:00.1: cvl_0_1 
00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.654 10:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:21:07.654 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:21:07.654 00:21:07.655 --- 10.0.0.2 ping statistics --- 00:21:07.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.655 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:07.655 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:07.913 00:21:07.913 --- 10.0.0.1 ping statistics --- 00:21:07.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.913 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.913 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3546418 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3546418 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3546418 ']' 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:07.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.914 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.914 [2024-11-20 10:38:08.485579] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:21:07.914 [2024-11-20 10:38:08.485628] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.914 [2024-11-20 10:38:08.563384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.914 [2024-11-20 10:38:08.605868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.914 [2024-11-20 10:38:08.605905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.914 [2024-11-20 10:38:08.605913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.914 [2024-11-20 10:38:08.605919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.914 [2024-11-20 10:38:08.605925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.914 [2024-11-20 10:38:08.607604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.914 [2024-11-20 10:38:08.607714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.914 [2024-11-20 10:38:08.607820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.914 [2024-11-20 10:38:08.607820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.172 [2024-11-20 10:38:08.745604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.172 10:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.172 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.172 Malloc1 00:21:08.172 [2024-11-20 10:38:08.859536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.172 Malloc2 00:21:08.431 Malloc3 00:21:08.431 Malloc4 00:21:08.431 Malloc5 00:21:08.431 Malloc6 00:21:08.431 Malloc7 00:21:08.431 Malloc8 00:21:08.688 Malloc9 
00:21:08.688 Malloc10 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3546638 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:08.688 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:08.688 [2024-11-20 10:38:09.364575] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:13.959 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3546418 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3546418 ']' 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3546418 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546418 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546418' 00:21:13.960 killing process with pid 3546418 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3546418 00:21:13.960 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3546418 00:21:13.960 [2024-11-20 10:38:14.356729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 
10:38:14.356784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.356837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17187f0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357591] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.357623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cc0 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 [2024-11-20 10:38:14.358470] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717e50 is same with the state(6) to be set 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, 
sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 [2024-11-20 10:38:14.365942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 
starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.960 starting I/O failed: -6 00:21:13.960 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 [2024-11-20 10:38:14.366867] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 
starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 
Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 [2024-11-20 10:38:14.367887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O 
failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting 
I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.961 Write completed with error (sct=0, sc=8) 00:21:13.961 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 [2024-11-20 10:38:14.369675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No 
such device or address) on qpair id 1 00:21:13.962 NVMe io qpair process completion error 00:21:13.962 [2024-11-20 10:38:14.370013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715ca0 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716170 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716170 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716170 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.370624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716640 is same with the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 [2024-11-20 10:38:14.371726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ 
transport error -6 (No such device or address) on qpair id 3 00:21:13.962 [2024-11-20 10:38:14.372013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 [2024-11-20 10:38:14.372028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with Write completed with error (sct=0, sc=8) 00:21:13.962 the state(6) to be set 00:21:13.962 starting I/O failed: -6 00:21:13.962 [2024-11-20 10:38:14.372045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with Write completed with error (sct=0, sc=8) 00:21:13.962 the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1682250 is same with the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error 
(sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 
00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 [2024-11-20 10:38:14.372638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with Write completed with error (sct=0, sc=8) 00:21:13.962 the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 [2024-11-20 10:38:14.372662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with the state(6) to be set 00:21:13.962 starting I/O failed: -6 00:21:13.962 [2024-11-20 10:38:14.372672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 [2024-11-20 10:38:14.372691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with the state(6) to be set 00:21:13.962 [2024-11-20 10:38:14.372698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16818b0 is same with the state(6) to be set 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 [2024-11-20 10:38:14.372744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.962 NVMe io qpair process completion error 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write 
completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 starting I/O failed: -6 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.962 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 starting I/O failed: -6 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 starting I/O failed: -6 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 starting I/O failed: -6 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, sc=8) 00:21:13.963 Write completed with error (sct=0, 
sc=8)
00:21:13.963 Write completed with error (sct=0, sc=8)
00:21:13.963 starting I/O failed: -6
00:21:13.963 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.963 [2024-11-20 10:38:14.373712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.963 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.963 [2024-11-20 10:38:14.374626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.963 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.963 [2024-11-20 10:38:14.375633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.963 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.964 [2024-11-20 10:38:14.377162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.964 NVMe io qpair process completion error
00:21:13.964 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.964 [2024-11-20 10:38:14.378164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.964 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.965 [2024-11-20 10:38:14.379082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.965 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.965 [2024-11-20 10:38:14.380087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.965 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.966 [2024-11-20 10:38:14.381901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.966 NVMe io qpair process completion error
00:21:13.966 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.966 [2024-11-20 10:38:14.382893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.966 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.966 [2024-11-20 10:38:14.383766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.966 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.967 [2024-11-20 10:38:14.384821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.967 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records ...]
00:21:13.967 Write completed with error (sct=0, sc=8)
00:21:13.967 starting I/O failed: -6
00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: 
-6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 [2024-11-20 10:38:14.386587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.967 NVMe io qpair process completion error 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 
00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 [2024-11-20 10:38:14.387645] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 
00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 starting I/O failed: -6 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.967 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 [2024-11-20 10:38:14.388557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 
starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 
Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 [2024-11-20 10:38:14.389565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O 
failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting 
I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 
starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.968 starting I/O failed: -6 00:21:13.968 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 [2024-11-20 10:38:14.394922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.969 NVMe io qpair process completion error 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed 
with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 [2024-11-20 10:38:14.395873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.969 starting I/O failed: -6 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: 
-6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 [2024-11-20 10:38:14.396804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 Write completed with 
error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6 00:21:13.969 Write completed with error (sct=0, sc=8) 00:21:13.969 starting I/O failed: -6
[… "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated many times between 00:21:13.969 and 00:21:13.974; duplicates omitted, distinct ERROR entries retained below …]
00:21:13.970 [2024-11-20 10:38:14.397867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.970 [2024-11-20 10:38:14.399906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.970 NVMe io qpair process completion error
00:21:13.971 [2024-11-20 10:38:14.400775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.971 [2024-11-20 10:38:14.401678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.971 [2024-11-20 10:38:14.402716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.972 [2024-11-20 10:38:14.404719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.972 NVMe io qpair process completion error
00:21:13.972 [2024-11-20 10:38:14.405722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.973 [2024-11-20 10:38:14.406614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.973 [2024-11-20 10:38:14.407654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.974 [2024-11-20 10:38:14.415301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.974 NVMe io qpair process completion error
00:21:13.974 Write completed with error (sct=0, sc=8)
[… further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeats omitted …]
00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 [2024-11-20 10:38:14.416317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 
00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write 
completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 [2024-11-20 10:38:14.417243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 
00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.974 Write completed with error (sct=0, sc=8) 00:21:13.974 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 
00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 [2024-11-20 10:38:14.418344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, 
sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error 
(sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with 
error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 Write completed with error (sct=0, sc=8) 00:21:13.975 starting I/O failed: -6 00:21:13.975 [2024-11-20 10:38:14.420241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.975 NVMe io qpair process completion error 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error 
(sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error 
(sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error 
(sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Write completed with error (sct=0, sc=8) 00:21:13.976 Initializing NVMe Controllers 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:13.976 Controller IO queue size 128, less than required. 00:21:13.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:13.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:13.976 Initialization complete. Launching workers. 
00:21:13.976 ======================================================== 00:21:13.976 Latency(us) 00:21:13.976 Device Information : IOPS MiB/s Average min max 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2116.87 90.96 60473.08 939.72 113039.59 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2136.13 91.79 59986.92 670.90 112092.81 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2168.96 93.20 59093.66 835.71 108971.73 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2139.19 91.92 59933.18 892.55 114281.01 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2142.70 92.07 59924.59 862.74 122516.01 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2177.94 93.58 58260.32 692.61 106678.59 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2182.97 93.80 58631.15 424.73 121723.97 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2150.14 92.39 59017.73 941.12 105397.83 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2133.07 91.66 59499.72 750.89 105005.08 00:21:13.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2135.91 91.78 59432.66 914.80 105309.29 00:21:13.976 ======================================================== 00:21:13.976 Total : 21483.88 923.14 59419.80 424.73 122516.01 00:21:13.976 00:21:13.977 [2024-11-20 10:38:14.426414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583ae0 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582a70 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x582740 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582410 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581ef0 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583900 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581bc0 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581560 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583720 is same with the state(6) to be set 00:21:13.977 [2024-11-20 10:38:14.426730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581890 is same with the state(6) to be set 00:21:13.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:14.236 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3546638 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3546638 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 
-- # local arg=wait 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3546638 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.172 rmmod nvme_tcp 00:21:15.172 rmmod nvme_fabrics 00:21:15.172 rmmod nvme_keyring 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3546418 ']' 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3546418 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3546418 ']' 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3546418 00:21:15.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3546418) - No such process 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3546418 is not found' 00:21:15.172 Process with pid 3546418 is not found 00:21:15.172 10:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.172 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.706 00:21:17.706 real 0m9.774s 00:21:17.706 user 0m24.941s 00:21:17.706 sys 0m5.141s 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.706 10:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 ************************************ 00:21:17.706 END TEST nvmf_shutdown_tc4 00:21:17.706 ************************************ 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:17.706 00:21:17.706 real 0m40.870s 00:21:17.706 user 1m40.894s 00:21:17.706 sys 0m14.038s 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 ************************************ 00:21:17.706 END TEST nvmf_shutdown 00:21:17.706 ************************************ 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.706 10:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 ************************************ 00:21:17.706 START TEST nvmf_nsid 00:21:17.706 ************************************ 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:17.706 * Looking for test storage... 
00:21:17.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.706 
10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.706 --rc genhtml_branch_coverage=1 00:21:17.706 --rc genhtml_function_coverage=1 00:21:17.706 --rc genhtml_legend=1 00:21:17.706 --rc geninfo_all_blocks=1 00:21:17.706 --rc 
geninfo_unexecuted_blocks=1 00:21:17.706 00:21:17.706 ' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.706 --rc genhtml_branch_coverage=1 00:21:17.706 --rc genhtml_function_coverage=1 00:21:17.706 --rc genhtml_legend=1 00:21:17.706 --rc geninfo_all_blocks=1 00:21:17.706 --rc geninfo_unexecuted_blocks=1 00:21:17.706 00:21:17.706 ' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.706 --rc genhtml_branch_coverage=1 00:21:17.706 --rc genhtml_function_coverage=1 00:21:17.706 --rc genhtml_legend=1 00:21:17.706 --rc geninfo_all_blocks=1 00:21:17.706 --rc geninfo_unexecuted_blocks=1 00:21:17.706 00:21:17.706 ' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.706 --rc genhtml_branch_coverage=1 00:21:17.706 --rc genhtml_function_coverage=1 00:21:17.706 --rc genhtml_legend=1 00:21:17.706 --rc geninfo_all_blocks=1 00:21:17.706 --rc geninfo_unexecuted_blocks=1 00:21:17.706 00:21:17.706 ' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.706 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.706 10:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.707 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.290 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.290 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.290 10:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.290 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.290 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.291 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:24.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:21:24.291 00:21:24.291 --- 10.0.0.2 ping statistics --- 00:21:24.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.291 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:24.291 00:21:24.291 --- 10.0.0.1 ping statistics --- 00:21:24.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.291 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.291 10:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3551104 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3551104 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3551104 ']' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.291 [2024-11-20 10:38:24.234690] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:24.291 [2024-11-20 10:38:24.234746] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.291 [2024-11-20 10:38:24.318847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.291 [2024-11-20 10:38:24.359140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.291 [2024-11-20 10:38:24.359176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.291 [2024-11-20 10:38:24.359184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.291 [2024-11-20 10:38:24.359190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.291 [2024-11-20 10:38:24.359196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.291 [2024-11-20 10:38:24.359747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3551298 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.291 
10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0bfbc8fe-4b12-4469-a46f-ecff23258888 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e830a7e3-dc6c-4b46-82de-879b9260ffe8 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1bc46934-7237-4452-9f36-ab34c3ebdb5d 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.291 null0 00:21:24.291 null1 00:21:24.291 null2 00:21:24.291 [2024-11-20 10:38:24.556706] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:24.291 [2024-11-20 10:38:24.556752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551298 ] 00:21:24.291 [2024-11-20 10:38:24.560504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.291 [2024-11-20 10:38:24.584704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3551298 /var/tmp/tgt2.sock 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3551298 ']' 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:24.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.291 [2024-11-20 10:38:24.633873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.291 [2024-11-20 10:38:24.675181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:24.291 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:24.548 [2024-11-20 10:38:25.207978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.548 [2024-11-20 10:38:25.224090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:24.548 nvme0n1 nvme0n2 00:21:24.548 nvme1n1 00:21:24.804 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:24.804 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:24.804 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:25.736 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0bfbc8fe-4b12-4469-a46f-ecff23258888 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:26.667 10:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:26.667 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0bfbc8fe4b124469a46fecff23258888 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0BFBC8FE4B124469A46FECFF23258888 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0BFBC8FE4B124469A46FECFF23258888 == \0\B\F\B\C\8\F\E\4\B\1\2\4\4\6\9\A\4\6\F\E\C\F\F\2\3\2\5\8\8\8\8 ]] 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e830a7e3-dc6c-4b46-82de-879b9260ffe8 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:26.924 
10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e830a7e3dc6c4b4682de879b9260ffe8 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E830A7E3DC6C4B4682DE879B9260FFE8 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E830A7E3DC6C4B4682DE879B9260FFE8 == \E\8\3\0\A\7\E\3\D\C\6\C\4\B\4\6\8\2\D\E\8\7\9\B\9\2\6\0\F\F\E\8 ]] 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1bc46934-7237-4452-9f36-ab34c3ebdb5d 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1bc46934723744529f36ab34c3ebdb5d 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1BC46934723744529F36AB34C3EBDB5D 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1BC46934723744529F36AB34C3EBDB5D == \1\B\C\4\6\9\3\4\7\2\3\7\4\4\5\2\9\F\3\6\A\B\3\4\C\3\E\B\D\B\5\D ]] 00:21:26.924 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3551298 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3551298 ']' 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3551298 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551298 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551298' 00:21:27.182 killing process with pid 3551298 00:21:27.182 10:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3551298 00:21:27.182 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3551298 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.439 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.439 rmmod nvme_tcp 00:21:27.439 rmmod nvme_fabrics 00:21:27.439 rmmod nvme_keyring 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3551104 ']' 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3551104 ']' 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.698 10:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551104' 00:21:27.698 killing process with pid 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3551104 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.698 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.698 10:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:30.229 00:21:30.229 real 0m12.459s 00:21:30.229 user 0m9.738s 00:21:30.229 sys 0m5.552s 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.229 ************************************ 00:21:30.229 END TEST nvmf_nsid 00:21:30.229 ************************************ 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:30.229 00:21:30.229 real 12m4.898s 00:21:30.229 user 26m4.896s 00:21:30.229 sys 3m45.291s 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.229 10:38:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:30.229 ************************************ 00:21:30.229 END TEST nvmf_target_extra 00:21:30.229 ************************************ 00:21:30.229 10:38:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:30.229 10:38:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.229 10:38:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.229 10:38:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.229 ************************************ 00:21:30.229 START TEST nvmf_host 00:21:30.229 ************************************ 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:30.229 * Looking for test storage... 
00:21:30.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:30.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.229 --rc genhtml_branch_coverage=1 00:21:30.229 --rc genhtml_function_coverage=1 00:21:30.229 --rc genhtml_legend=1 00:21:30.229 --rc geninfo_all_blocks=1 00:21:30.229 --rc geninfo_unexecuted_blocks=1 00:21:30.229 00:21:30.229 ' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:30.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.229 --rc genhtml_branch_coverage=1 00:21:30.229 --rc genhtml_function_coverage=1 00:21:30.229 --rc genhtml_legend=1 00:21:30.229 --rc 
geninfo_all_blocks=1 00:21:30.229 --rc geninfo_unexecuted_blocks=1 00:21:30.229 00:21:30.229 ' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:30.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.229 --rc genhtml_branch_coverage=1 00:21:30.229 --rc genhtml_function_coverage=1 00:21:30.229 --rc genhtml_legend=1 00:21:30.229 --rc geninfo_all_blocks=1 00:21:30.229 --rc geninfo_unexecuted_blocks=1 00:21:30.229 00:21:30.229 ' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:30.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.229 --rc genhtml_branch_coverage=1 00:21:30.229 --rc genhtml_function_coverage=1 00:21:30.229 --rc genhtml_legend=1 00:21:30.229 --rc geninfo_all_blocks=1 00:21:30.229 --rc geninfo_unexecuted_blocks=1 00:21:30.229 00:21:30.229 ' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.229 10:38:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:30.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.230 ************************************ 00:21:30.230 START TEST nvmf_multicontroller 00:21:30.230 ************************************ 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:30.230 * Looking for test storage... 
00:21:30.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.230 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:30.487 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:30.487 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.488 --rc genhtml_branch_coverage=1 00:21:30.488 --rc genhtml_function_coverage=1 
00:21:30.488 --rc genhtml_legend=1 00:21:30.488 --rc geninfo_all_blocks=1 00:21:30.488 --rc geninfo_unexecuted_blocks=1 00:21:30.488 00:21:30.488 ' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.488 --rc genhtml_branch_coverage=1 00:21:30.488 --rc genhtml_function_coverage=1 00:21:30.488 --rc genhtml_legend=1 00:21:30.488 --rc geninfo_all_blocks=1 00:21:30.488 --rc geninfo_unexecuted_blocks=1 00:21:30.488 00:21:30.488 ' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.488 --rc genhtml_branch_coverage=1 00:21:30.488 --rc genhtml_function_coverage=1 00:21:30.488 --rc genhtml_legend=1 00:21:30.488 --rc geninfo_all_blocks=1 00:21:30.488 --rc geninfo_unexecuted_blocks=1 00:21:30.488 00:21:30.488 ' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:30.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.488 --rc genhtml_branch_coverage=1 00:21:30.488 --rc genhtml_function_coverage=1 00:21:30.488 --rc genhtml_legend=1 00:21:30.488 --rc geninfo_all_blocks=1 00:21:30.488 --rc geninfo_unexecuted_blocks=1 00:21:30.488 00:21:30.488 ' 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.488 10:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.488 10:38:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:30.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:30.488 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.489 10:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.047 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:37.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:37.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.048 10:38:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:37.048 Found net devices under 0000:86:00.0: cvl_0_0 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:37.048 Found net devices under 0000:86:00.1: cvl_0_1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:21:37.048 00:21:37.048 --- 10.0.0.2 ping statistics --- 00:21:37.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.048 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:37.048 00:21:37.048 --- 10.0.0.1 ping statistics --- 00:21:37.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.048 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3555431 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3555431 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3555431 ']' 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.048 10:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.048 [2024-11-20 10:38:37.025170] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:21:37.048 [2024-11-20 10:38:37.025215] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.048 [2024-11-20 10:38:37.105590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:37.048 [2024-11-20 10:38:37.150265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.048 [2024-11-20 10:38:37.150295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:37.048 [2024-11-20 10:38:37.150303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.048 [2024-11-20 10:38:37.150309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.048 [2024-11-20 10:38:37.150314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.048 [2024-11-20 10:38:37.151750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.048 [2024-11-20 10:38:37.151855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.048 [2024-11-20 10:38:37.151857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.048 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.048 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 [2024-11-20 10:38:37.287483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 Malloc0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 [2024-11-20 
10:38:37.343846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 [2024-11-20 10:38:37.351778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 Malloc1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3555607 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3555607 /var/tmp/bdevperf.sock 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3555607 ']' 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 NVMe0n1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 1 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.049 10:38:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.307 request: 00:21:37.307 { 00:21:37.307 "name": "NVMe0", 00:21:37.307 "trtype": "tcp", 00:21:37.307 "traddr": "10.0.0.2", 00:21:37.307 "adrfam": "ipv4", 00:21:37.307 "trsvcid": "4420", 00:21:37.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.307 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:37.307 "hostaddr": "10.0.0.1", 00:21:37.307 "prchk_reftag": false, 00:21:37.307 "prchk_guard": false, 00:21:37.307 "hdgst": false, 00:21:37.307 "ddgst": false, 00:21:37.307 "allow_unrecognized_csi": false, 00:21:37.307 "method": "bdev_nvme_attach_controller", 00:21:37.307 "req_id": 1 00:21:37.307 } 00:21:37.307 Got JSON-RPC error response 00:21:37.307 response: 00:21:37.307 { 00:21:37.307 "code": -114, 00:21:37.307 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:37.307 } 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:37.307 10:38:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.307 request: 00:21:37.307 { 00:21:37.307 "name": "NVMe0", 00:21:37.307 "trtype": "tcp", 00:21:37.307 "traddr": "10.0.0.2", 00:21:37.307 "adrfam": "ipv4", 00:21:37.307 "trsvcid": "4420", 00:21:37.307 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:37.307 "hostaddr": "10.0.0.1", 00:21:37.307 "prchk_reftag": false, 00:21:37.307 "prchk_guard": false, 00:21:37.307 "hdgst": false, 00:21:37.307 "ddgst": false, 00:21:37.307 "allow_unrecognized_csi": false, 00:21:37.307 "method": "bdev_nvme_attach_controller", 00:21:37.307 "req_id": 1 00:21:37.307 } 00:21:37.307 Got JSON-RPC error response 00:21:37.307 response: 00:21:37.307 { 00:21:37.307 "code": -114, 00:21:37.307 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:37.307 } 00:21:37.307 10:38:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:37.307 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.308 request: 00:21:37.308 { 00:21:37.308 "name": "NVMe0", 00:21:37.308 "trtype": "tcp", 00:21:37.308 "traddr": "10.0.0.2", 00:21:37.308 "adrfam": "ipv4", 00:21:37.308 "trsvcid": "4420", 00:21:37.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.308 "hostaddr": "10.0.0.1", 00:21:37.308 "prchk_reftag": false, 00:21:37.308 "prchk_guard": false, 00:21:37.308 "hdgst": false, 00:21:37.308 "ddgst": false, 00:21:37.308 "multipath": "disable", 00:21:37.308 "allow_unrecognized_csi": false, 00:21:37.308 "method": "bdev_nvme_attach_controller", 00:21:37.308 "req_id": 1 00:21:37.308 } 00:21:37.308 Got JSON-RPC error response 00:21:37.308 response: 00:21:37.308 { 00:21:37.308 "code": -114, 00:21:37.308 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:37.308 } 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.308 request: 00:21:37.308 { 00:21:37.308 "name": "NVMe0", 00:21:37.308 "trtype": "tcp", 00:21:37.308 "traddr": "10.0.0.2", 00:21:37.308 "adrfam": "ipv4", 00:21:37.308 "trsvcid": "4420", 00:21:37.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.308 "hostaddr": "10.0.0.1", 00:21:37.308 "prchk_reftag": false, 00:21:37.308 "prchk_guard": false, 00:21:37.308 "hdgst": false, 00:21:37.308 "ddgst": false, 00:21:37.308 "multipath": "failover", 00:21:37.308 "allow_unrecognized_csi": false, 00:21:37.308 "method": "bdev_nvme_attach_controller", 00:21:37.308 "req_id": 1 00:21:37.308 } 00:21:37.308 Got JSON-RPC error response 00:21:37.308 response: 00:21:37.308 { 00:21:37.308 "code": -114, 00:21:37.308 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:37.308 } 00:21:37.308 10:38:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.308 NVMe0n1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.308 10:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.565 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:37.565 10:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.938 { 00:21:38.938 "results": [ 00:21:38.938 { 00:21:38.938 "job": "NVMe0n1", 00:21:38.938 "core_mask": "0x1", 00:21:38.938 "workload": "write", 00:21:38.938 "status": "finished", 00:21:38.938 "queue_depth": 128, 00:21:38.938 "io_size": 4096, 00:21:38.938 "runtime": 1.006855, 00:21:38.938 "iops": 24161.373782719456, 00:21:38.938 "mibps": 94.38036633874788, 00:21:38.938 "io_failed": 0, 00:21:38.938 "io_timeout": 0, 00:21:38.938 "avg_latency_us": 5288.083794531393, 00:21:38.938 "min_latency_us": 3191.318260869565, 00:21:38.938 "max_latency_us": 12594.30956521739 00:21:38.938 } 00:21:38.938 ], 00:21:38.938 "core_count": 1 00:21:38.938 } 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3555607 ']' 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555607' 00:21:38.938 killing process with pid 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3555607 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:38.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:38.938 [2024-11-20 10:38:37.456815] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:38.938 [2024-11-20 10:38:37.456864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555607 ] 00:21:38.938 [2024-11-20 10:38:37.531319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.938 [2024-11-20 10:38:37.572774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.938 [2024-11-20 10:38:38.119634] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 1019c9c0-341e-4bfd-89a9-8ae7dc2f81c8 already exists 00:21:38.938 [2024-11-20 10:38:38.119661] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:1019c9c0-341e-4bfd-89a9-8ae7dc2f81c8 alias for bdev NVMe1n1 00:21:38.938 [2024-11-20 10:38:38.119668] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:38.938 Running I/O for 1 seconds... 00:21:38.938 24106.00 IOPS, 94.16 MiB/s 00:21:38.938 Latency(us) 00:21:38.938 [2024-11-20T09:38:39.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.938 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:38.938 NVMe0n1 : 1.01 24161.37 94.38 0.00 0.00 5288.08 3191.32 12594.31 00:21:38.938 [2024-11-20T09:38:39.669Z] =================================================================================================================== 00:21:38.938 [2024-11-20T09:38:39.669Z] Total : 24161.37 94.38 0.00 0.00 5288.08 3191.32 12594.31 00:21:38.938 Received shutdown signal, test time was about 1.000000 seconds 00:21:38.938 00:21:38.938 Latency(us) 00:21:38.938 [2024-11-20T09:38:39.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.938 [2024-11-20T09:38:39.669Z] =================================================================================================================== 00:21:38.938 [2024-11-20T09:38:39.669Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:38.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.938 rmmod nvme_tcp 00:21:38.938 rmmod nvme_fabrics 00:21:38.938 rmmod nvme_keyring 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3555431 ']' 00:21:38.938 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3555431 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3555431 ']' 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3555431 
00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555431 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555431' 00:21:38.939 killing process with pid 3555431 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3555431 00:21:38.939 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3555431 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.197 10:38:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.730 00:21:41.730 real 0m11.096s 00:21:41.730 user 0m12.020s 00:21:41.730 sys 0m5.186s 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.730 ************************************ 00:21:41.730 END TEST nvmf_multicontroller 00:21:41.730 ************************************ 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.730 ************************************ 00:21:41.730 START TEST nvmf_aer 00:21:41.730 ************************************ 00:21:41.730 10:38:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.730 * Looking for test storage... 
00:21:41.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.730 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc 
genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.731 10:38:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.731 10:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.346 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:48.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:48.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.347 10:38:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:48.347 Found net devices under 0000:86:00.0: cvl_0_0 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:48.347 Found net devices under 0000:86:00.1: cvl_0_1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.347 10:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:21:48.347 00:21:48.347 --- 10.0.0.2 ping statistics --- 00:21:48.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.347 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:48.347 00:21:48.347 --- 10.0.0.1 ping statistics --- 00:21:48.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.347 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3559441 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3559441 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3559441 ']' 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.347 10:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.348 [2024-11-20 10:38:48.215411] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:21:48.348 [2024-11-20 10:38:48.215460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.348 [2024-11-20 10:38:48.296170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.348 [2024-11-20 10:38:48.338060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:48.348 [2024-11-20 10:38:48.338096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.348 [2024-11-20 10:38:48.338104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.348 [2024-11-20 10:38:48.338111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.348 [2024-11-20 10:38:48.338117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.348 [2024-11-20 10:38:48.339772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.348 [2024-11-20 10:38:48.339903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.348 [2024-11-20 10:38:48.340015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.348 [2024-11-20 10:38:48.340016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.348 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.348 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:48.348 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.348 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.348 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 [2024-11-20 10:38:49.107450] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 Malloc0 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 [2024-11-20 10:38:49.174761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 [ 00:21:48.607 { 00:21:48.607 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:48.607 "subtype": "Discovery", 00:21:48.607 "listen_addresses": [], 00:21:48.607 "allow_any_host": true, 00:21:48.607 "hosts": [] 00:21:48.607 }, 00:21:48.607 { 00:21:48.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.607 "subtype": "NVMe", 00:21:48.607 "listen_addresses": [ 00:21:48.607 { 00:21:48.607 "trtype": "TCP", 00:21:48.607 "adrfam": "IPv4", 00:21:48.607 "traddr": "10.0.0.2", 00:21:48.607 "trsvcid": "4420" 00:21:48.607 } 00:21:48.607 ], 00:21:48.607 "allow_any_host": true, 00:21:48.607 "hosts": [], 00:21:48.607 "serial_number": "SPDK00000000000001", 00:21:48.607 "model_number": "SPDK bdev Controller", 00:21:48.607 "max_namespaces": 2, 00:21:48.607 "min_cntlid": 1, 00:21:48.607 "max_cntlid": 65519, 00:21:48.607 "namespaces": [ 00:21:48.607 { 00:21:48.607 "nsid": 1, 00:21:48.607 "bdev_name": "Malloc0", 00:21:48.607 "name": "Malloc0", 00:21:48.607 "nguid": "1F634E96B2B74E5B902E8D998197CC42", 00:21:48.607 "uuid": "1f634e96-b2b7-4e5b-902e-8d998197cc42" 00:21:48.607 } 00:21:48.607 ] 00:21:48.607 } 00:21:48.607 ] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3559692 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:48.607 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 Malloc1 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 Asynchronous Event Request test 00:21:48.867 Attaching to 10.0.0.2 00:21:48.867 Attached to 10.0.0.2 00:21:48.867 Registering asynchronous event callbacks... 00:21:48.867 Starting namespace attribute notice tests for all controllers... 00:21:48.867 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:48.867 aer_cb - Changed Namespace 00:21:48.867 Cleaning up... 
00:21:48.867 [ 00:21:48.867 { 00:21:48.867 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:48.867 "subtype": "Discovery", 00:21:48.867 "listen_addresses": [], 00:21:48.867 "allow_any_host": true, 00:21:48.867 "hosts": [] 00:21:48.867 }, 00:21:48.867 { 00:21:48.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.867 "subtype": "NVMe", 00:21:48.867 "listen_addresses": [ 00:21:48.867 { 00:21:48.867 "trtype": "TCP", 00:21:48.867 "adrfam": "IPv4", 00:21:48.867 "traddr": "10.0.0.2", 00:21:48.867 "trsvcid": "4420" 00:21:48.867 } 00:21:48.867 ], 00:21:48.867 "allow_any_host": true, 00:21:48.867 "hosts": [], 00:21:48.867 "serial_number": "SPDK00000000000001", 00:21:48.867 "model_number": "SPDK bdev Controller", 00:21:48.867 "max_namespaces": 2, 00:21:48.867 "min_cntlid": 1, 00:21:48.867 "max_cntlid": 65519, 00:21:48.867 "namespaces": [ 00:21:48.867 { 00:21:48.867 "nsid": 1, 00:21:48.867 "bdev_name": "Malloc0", 00:21:48.867 "name": "Malloc0", 00:21:48.867 "nguid": "1F634E96B2B74E5B902E8D998197CC42", 00:21:48.867 "uuid": "1f634e96-b2b7-4e5b-902e-8d998197cc42" 00:21:48.867 }, 00:21:48.867 { 00:21:48.867 "nsid": 2, 00:21:48.867 "bdev_name": "Malloc1", 00:21:48.867 "name": "Malloc1", 00:21:48.867 "nguid": "B6473917D35F4FAFAD5BFE8E5FC2435B", 00:21:48.867 "uuid": "b6473917-d35f-4faf-ad5b-fe8e5fc2435b" 00:21:48.867 } 00:21:48.867 ] 00:21:48.867 } 00:21:48.867 ] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3559692 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.867 rmmod nvme_tcp 00:21:48.867 rmmod nvme_fabrics 00:21:48.867 rmmod nvme_keyring 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3559441 ']' 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3559441 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3559441 ']' 00:21:48.867 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3559441 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559441 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559441' 00:21:49.126 killing process with pid 3559441 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3559441 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3559441 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.126 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.127 10:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.661 00:21:51.661 real 0m9.888s 00:21:51.661 user 0m7.825s 00:21:51.661 sys 0m4.895s 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.661 ************************************ 00:21:51.661 END TEST nvmf_aer 00:21:51.661 ************************************ 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.661 ************************************ 00:21:51.661 START TEST nvmf_async_init 00:21:51.661 ************************************ 00:21:51.661 10:38:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.661 * Looking for test storage... 
00:21:51.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.661 10:38:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.661 --rc genhtml_branch_coverage=1 00:21:51.661 --rc genhtml_function_coverage=1 00:21:51.661 --rc genhtml_legend=1 00:21:51.661 --rc geninfo_all_blocks=1 00:21:51.661 --rc geninfo_unexecuted_blocks=1 00:21:51.661 
00:21:51.661 ' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.661 --rc genhtml_branch_coverage=1 00:21:51.661 --rc genhtml_function_coverage=1 00:21:51.661 --rc genhtml_legend=1 00:21:51.661 --rc geninfo_all_blocks=1 00:21:51.661 --rc geninfo_unexecuted_blocks=1 00:21:51.661 00:21:51.661 ' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.661 --rc genhtml_branch_coverage=1 00:21:51.661 --rc genhtml_function_coverage=1 00:21:51.661 --rc genhtml_legend=1 00:21:51.661 --rc geninfo_all_blocks=1 00:21:51.661 --rc geninfo_unexecuted_blocks=1 00:21:51.661 00:21:51.661 ' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.661 --rc genhtml_branch_coverage=1 00:21:51.661 --rc genhtml_function_coverage=1 00:21:51.661 --rc genhtml_legend=1 00:21:51.661 --rc geninfo_all_blocks=1 00:21:51.661 --rc geninfo_unexecuted_blocks=1 00:21:51.661 00:21:51.661 ' 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.661 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e29404f194ae4edbaf2e81bdb918ad2b 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.662 10:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.229 10:38:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.229 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:58.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:58.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:58.230 Found net devices under 0000:86:00.0: cvl_0_0 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:58.230 Found net devices under 0000:86:00.1: cvl_0_1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.230 10:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:58.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:21:58.230 00:21:58.230 --- 10.0.0.2 ping statistics --- 00:21:58.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.230 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:58.230 00:21:58.230 --- 10.0.0.1 ping statistics --- 00:21:58.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.230 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3563216 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3563216 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3563216 ']' 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.230 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.230 [2024-11-20 10:38:58.140241] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:21:58.230 [2024-11-20 10:38:58.140295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.230 [2024-11-20 10:38:58.218804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.230 [2024-11-20 10:38:58.260995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.230 [2024-11-20 10:38:58.261029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.231 [2024-11-20 10:38:58.261036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.231 [2024-11-20 10:38:58.261042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.231 [2024-11-20 10:38:58.261047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
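For readers following the trace, the interface/namespace plumbing that `nvmf_tcp_init` performed earlier in this log (nvmf/common.sh lines 265–287) can be recapped as a dry-run sketch. The `cvl_0_0`/`cvl_0_1` interface names and the `10.0.0.x` addresses are taken from the trace above; the commands are echoed rather than executed, since applying them needs root and the E810 hardware:

```shell
# Dry-run recap of the nvmf_tcp_init topology traced above (interface names
# and addresses are the ones from this log, not universal defaults).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, stays in the default namespace
run() { echo "+ $*"; }   # replace 'echo' with 'sudo' to apply for real

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two pings in the log (host side to 10.0.0.2, namespace side to 10.0.0.1) verify this topology before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.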
00:21:58.231 [2024-11-20 10:38:58.261596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [2024-11-20 10:38:58.392012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 null0 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e29404f194ae4edbaf2e81bdb918ad2b 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [2024-11-20 10:38:58.444274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 nvme0n1 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [ 00:21:58.231 { 00:21:58.231 "name": "nvme0n1", 00:21:58.231 "aliases": [ 00:21:58.231 "e29404f1-94ae-4edb-af2e-81bdb918ad2b" 00:21:58.231 ], 00:21:58.231 "product_name": "NVMe disk", 00:21:58.231 "block_size": 512, 00:21:58.231 "num_blocks": 2097152, 00:21:58.231 "uuid": "e29404f1-94ae-4edb-af2e-81bdb918ad2b", 00:21:58.231 "numa_id": 1, 00:21:58.231 "assigned_rate_limits": { 00:21:58.231 "rw_ios_per_sec": 0, 00:21:58.231 "rw_mbytes_per_sec": 0, 00:21:58.231 "r_mbytes_per_sec": 0, 00:21:58.231 "w_mbytes_per_sec": 0 00:21:58.231 }, 00:21:58.231 "claimed": false, 00:21:58.231 "zoned": false, 00:21:58.231 "supported_io_types": { 00:21:58.231 "read": true, 00:21:58.231 "write": true, 00:21:58.231 "unmap": false, 00:21:58.231 "flush": true, 00:21:58.231 "reset": true, 00:21:58.231 "nvme_admin": true, 00:21:58.231 "nvme_io": true, 00:21:58.231 "nvme_io_md": false, 00:21:58.231 "write_zeroes": true, 00:21:58.231 "zcopy": false, 00:21:58.231 "get_zone_info": false, 00:21:58.231 "zone_management": false, 00:21:58.231 "zone_append": false, 00:21:58.231 "compare": true, 00:21:58.231 "compare_and_write": true, 00:21:58.231 "abort": true, 00:21:58.231 "seek_hole": false, 00:21:58.231 "seek_data": false, 00:21:58.231 "copy": true, 00:21:58.231 
"nvme_iov_md": false 00:21:58.231 }, 00:21:58.231 "memory_domains": [ 00:21:58.231 { 00:21:58.231 "dma_device_id": "system", 00:21:58.231 "dma_device_type": 1 00:21:58.231 } 00:21:58.231 ], 00:21:58.231 "driver_specific": { 00:21:58.231 "nvme": [ 00:21:58.231 { 00:21:58.231 "trid": { 00:21:58.231 "trtype": "TCP", 00:21:58.231 "adrfam": "IPv4", 00:21:58.231 "traddr": "10.0.0.2", 00:21:58.231 "trsvcid": "4420", 00:21:58.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.231 }, 00:21:58.231 "ctrlr_data": { 00:21:58.231 "cntlid": 1, 00:21:58.231 "vendor_id": "0x8086", 00:21:58.231 "model_number": "SPDK bdev Controller", 00:21:58.231 "serial_number": "00000000000000000000", 00:21:58.231 "firmware_revision": "25.01", 00:21:58.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.231 "oacs": { 00:21:58.231 "security": 0, 00:21:58.231 "format": 0, 00:21:58.231 "firmware": 0, 00:21:58.231 "ns_manage": 0 00:21:58.231 }, 00:21:58.231 "multi_ctrlr": true, 00:21:58.231 "ana_reporting": false 00:21:58.231 }, 00:21:58.231 "vs": { 00:21:58.231 "nvme_version": "1.3" 00:21:58.231 }, 00:21:58.231 "ns_data": { 00:21:58.231 "id": 1, 00:21:58.231 "can_share": true 00:21:58.231 } 00:21:58.231 } 00:21:58.231 ], 00:21:58.231 "mp_policy": "active_passive" 00:21:58.231 } 00:21:58.231 } 00:21:58.231 ] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [2024-11-20 10:38:58.708815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:58.231 [2024-11-20 10:38:58.708887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x195b220 (9): Bad file descriptor 00:21:58.231 [2024-11-20 10:38:58.841029] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [ 00:21:58.231 { 00:21:58.231 "name": "nvme0n1", 00:21:58.231 "aliases": [ 00:21:58.231 "e29404f1-94ae-4edb-af2e-81bdb918ad2b" 00:21:58.231 ], 00:21:58.231 "product_name": "NVMe disk", 00:21:58.231 "block_size": 512, 00:21:58.231 "num_blocks": 2097152, 00:21:58.231 "uuid": "e29404f1-94ae-4edb-af2e-81bdb918ad2b", 00:21:58.231 "numa_id": 1, 00:21:58.231 "assigned_rate_limits": { 00:21:58.231 "rw_ios_per_sec": 0, 00:21:58.231 "rw_mbytes_per_sec": 0, 00:21:58.231 "r_mbytes_per_sec": 0, 00:21:58.231 "w_mbytes_per_sec": 0 00:21:58.231 }, 00:21:58.231 "claimed": false, 00:21:58.231 "zoned": false, 00:21:58.231 "supported_io_types": { 00:21:58.231 "read": true, 00:21:58.231 "write": true, 00:21:58.231 "unmap": false, 00:21:58.231 "flush": true, 00:21:58.231 "reset": true, 00:21:58.231 "nvme_admin": true, 00:21:58.231 "nvme_io": true, 00:21:58.232 "nvme_io_md": false, 00:21:58.232 "write_zeroes": true, 00:21:58.232 "zcopy": false, 00:21:58.232 "get_zone_info": false, 00:21:58.232 "zone_management": false, 00:21:58.232 "zone_append": false, 00:21:58.232 "compare": true, 00:21:58.232 "compare_and_write": true, 00:21:58.232 "abort": true, 00:21:58.232 "seek_hole": false, 00:21:58.232 "seek_data": false, 00:21:58.232 "copy": true, 00:21:58.232 "nvme_iov_md": false 00:21:58.232 }, 00:21:58.232 "memory_domains": [ 
00:21:58.232 { 00:21:58.232 "dma_device_id": "system", 00:21:58.232 "dma_device_type": 1 00:21:58.232 } 00:21:58.232 ], 00:21:58.232 "driver_specific": { 00:21:58.232 "nvme": [ 00:21:58.232 { 00:21:58.232 "trid": { 00:21:58.232 "trtype": "TCP", 00:21:58.232 "adrfam": "IPv4", 00:21:58.232 "traddr": "10.0.0.2", 00:21:58.232 "trsvcid": "4420", 00:21:58.232 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.232 }, 00:21:58.232 "ctrlr_data": { 00:21:58.232 "cntlid": 2, 00:21:58.232 "vendor_id": "0x8086", 00:21:58.232 "model_number": "SPDK bdev Controller", 00:21:58.232 "serial_number": "00000000000000000000", 00:21:58.232 "firmware_revision": "25.01", 00:21:58.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.232 "oacs": { 00:21:58.232 "security": 0, 00:21:58.232 "format": 0, 00:21:58.232 "firmware": 0, 00:21:58.232 "ns_manage": 0 00:21:58.232 }, 00:21:58.232 "multi_ctrlr": true, 00:21:58.232 "ana_reporting": false 00:21:58.232 }, 00:21:58.232 "vs": { 00:21:58.232 "nvme_version": "1.3" 00:21:58.232 }, 00:21:58.232 "ns_data": { 00:21:58.232 "id": 1, 00:21:58.232 "can_share": true 00:21:58.232 } 00:21:58.232 } 00:21:58.232 ], 00:21:58.232 "mp_policy": "active_passive" 00:21:58.232 } 00:21:58.232 } 00:21:58.232 ] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sRIwq3kk3k 
00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sRIwq3kk3k 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.sRIwq3kk3k 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 [2024-11-20 10:38:58.913431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.232 [2024-11-20 10:38:58.913539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
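The TLS steps just traced hinge on the key file's permissions: the trace runs `chmod 0600` on the temp file before `keyring_file_add_key`, because the keyring will not accept a PSK readable by group or other. A minimal sketch of that provisioning step follows; the PSK value is the test's sample key copied from the log, and the `rpc.py` lines are shown as comments since they need a running target:

```shell
# Write the NVMe/TCP PSK (interchange format) to a file usable by the keyring.
# PSK value, key name, and addresses are copied from the trace above.
PSK='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
key_path=$(mktemp)
printf '%s' "$PSK" > "$key_path"   # no trailing newline, matching the log's echo -n
chmod 0600 "$key_path"             # required before keyring_file_add_key

# With the target up, the secure listener is then wired roughly as the log shows:
#   rpc.py keyring_file_add_key key0 "$key_path"
#   rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
#          -t tcp -a 10.0.0.2 -s 4421 --secure-channel
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
#          nqn.2016-06.io.spdk:host1 --psk key0
```

Disabling `allow_any_host` matters here: the PSK is bound to the host NQN via `nvmf_subsystem_add_host`, so the host allow-list is what ties the key to the connecting initiator.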
00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.232 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.232 [2024-11-20 10:38:58.933497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.491 nvme0n1 00:21:58.491 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.491 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:58.491 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.491 10:38:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.491 [ 00:21:58.491 { 00:21:58.491 "name": "nvme0n1", 00:21:58.491 "aliases": [ 00:21:58.491 "e29404f1-94ae-4edb-af2e-81bdb918ad2b" 00:21:58.491 ], 00:21:58.491 "product_name": "NVMe disk", 00:21:58.491 "block_size": 512, 00:21:58.491 "num_blocks": 2097152, 00:21:58.491 "uuid": "e29404f1-94ae-4edb-af2e-81bdb918ad2b", 00:21:58.491 "numa_id": 1, 00:21:58.491 "assigned_rate_limits": { 00:21:58.491 "rw_ios_per_sec": 0, 00:21:58.491 
"rw_mbytes_per_sec": 0, 00:21:58.491 "r_mbytes_per_sec": 0, 00:21:58.491 "w_mbytes_per_sec": 0 00:21:58.491 }, 00:21:58.491 "claimed": false, 00:21:58.491 "zoned": false, 00:21:58.491 "supported_io_types": { 00:21:58.491 "read": true, 00:21:58.491 "write": true, 00:21:58.491 "unmap": false, 00:21:58.491 "flush": true, 00:21:58.491 "reset": true, 00:21:58.491 "nvme_admin": true, 00:21:58.491 "nvme_io": true, 00:21:58.491 "nvme_io_md": false, 00:21:58.491 "write_zeroes": true, 00:21:58.491 "zcopy": false, 00:21:58.491 "get_zone_info": false, 00:21:58.491 "zone_management": false, 00:21:58.491 "zone_append": false, 00:21:58.491 "compare": true, 00:21:58.491 "compare_and_write": true, 00:21:58.491 "abort": true, 00:21:58.491 "seek_hole": false, 00:21:58.491 "seek_data": false, 00:21:58.491 "copy": true, 00:21:58.491 "nvme_iov_md": false 00:21:58.491 }, 00:21:58.491 "memory_domains": [ 00:21:58.491 { 00:21:58.491 "dma_device_id": "system", 00:21:58.491 "dma_device_type": 1 00:21:58.491 } 00:21:58.491 ], 00:21:58.491 "driver_specific": { 00:21:58.491 "nvme": [ 00:21:58.491 { 00:21:58.491 "trid": { 00:21:58.491 "trtype": "TCP", 00:21:58.491 "adrfam": "IPv4", 00:21:58.491 "traddr": "10.0.0.2", 00:21:58.491 "trsvcid": "4421", 00:21:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.491 }, 00:21:58.491 "ctrlr_data": { 00:21:58.491 "cntlid": 3, 00:21:58.491 "vendor_id": "0x8086", 00:21:58.491 "model_number": "SPDK bdev Controller", 00:21:58.491 "serial_number": "00000000000000000000", 00:21:58.491 "firmware_revision": "25.01", 00:21:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.491 "oacs": { 00:21:58.491 "security": 0, 00:21:58.491 "format": 0, 00:21:58.491 "firmware": 0, 00:21:58.491 "ns_manage": 0 00:21:58.491 }, 00:21:58.491 "multi_ctrlr": true, 00:21:58.491 "ana_reporting": false 00:21:58.491 }, 00:21:58.491 "vs": { 00:21:58.491 "nvme_version": "1.3" 00:21:58.491 }, 00:21:58.491 "ns_data": { 00:21:58.491 "id": 1, 00:21:58.491 "can_share": true 00:21:58.491 } 
00:21:58.491 } 00:21:58.491 ], 00:21:58.491 "mp_policy": "active_passive" 00:21:58.491 } 00:21:58.491 } 00:21:58.491 ] 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.sRIwq3kk3k 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.491 rmmod nvme_tcp 00:21:58.491 rmmod nvme_fabrics 00:21:58.491 rmmod nvme_keyring 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:58.491 10:38:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3563216 ']' 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3563216 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3563216 ']' 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3563216 00:21:58.491 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563216 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563216' 00:21:58.492 killing process with pid 3563216 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3563216 00:21:58.492 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3563216 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.751 
10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.751 10:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.654 10:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.654 00:22:00.654 real 0m9.411s 00:22:00.654 user 0m3.043s 00:22:00.654 sys 0m4.814s 00:22:00.654 10:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.654 10:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.654 ************************************ 00:22:00.654 END TEST nvmf_async_init 00:22:00.654 ************************************ 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.913 ************************************ 00:22:00.913 START TEST dma 00:22:00.913 ************************************ 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:00.913 * Looking for test storage... 00:22:00.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.913 --rc genhtml_branch_coverage=1 00:22:00.913 --rc genhtml_function_coverage=1 00:22:00.913 --rc genhtml_legend=1 00:22:00.913 --rc geninfo_all_blocks=1 00:22:00.913 --rc geninfo_unexecuted_blocks=1 00:22:00.913 00:22:00.913 ' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.913 --rc genhtml_branch_coverage=1 00:22:00.913 --rc genhtml_function_coverage=1 
00:22:00.913 --rc genhtml_legend=1 00:22:00.913 --rc geninfo_all_blocks=1 00:22:00.913 --rc geninfo_unexecuted_blocks=1 00:22:00.913 00:22:00.913 ' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.913 --rc genhtml_branch_coverage=1 00:22:00.913 --rc genhtml_function_coverage=1 00:22:00.913 --rc genhtml_legend=1 00:22:00.913 --rc geninfo_all_blocks=1 00:22:00.913 --rc geninfo_unexecuted_blocks=1 00:22:00.913 00:22:00.913 ' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.913 --rc genhtml_branch_coverage=1 00:22:00.913 --rc genhtml_function_coverage=1 00:22:00.913 --rc genhtml_legend=1 00:22:00.913 --rc geninfo_all_blocks=1 00:22:00.913 --rc geninfo_unexecuted_blocks=1 00:22:00.913 00:22:00.913 ' 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.913 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:01.173 
10:39:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:01.173 10:39:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:01.173 00:22:01.173 real 0m0.214s 00:22:01.173 user 0m0.127s 00:22:01.173 sys 0m0.101s 00:22:01.174 10:39:01 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:01.174 ************************************ 00:22:01.174 END TEST dma 00:22:01.174 ************************************ 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.174 ************************************ 00:22:01.174 START TEST nvmf_identify 00:22:01.174 ************************************ 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:01.174 * Looking for test storage... 
00:22:01.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.174 --rc genhtml_branch_coverage=1 00:22:01.174 --rc genhtml_function_coverage=1 00:22:01.174 --rc genhtml_legend=1 00:22:01.174 --rc geninfo_all_blocks=1 00:22:01.174 --rc geninfo_unexecuted_blocks=1 00:22:01.174 00:22:01.174 ' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:01.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.174 --rc genhtml_branch_coverage=1 00:22:01.174 --rc genhtml_function_coverage=1 00:22:01.174 --rc genhtml_legend=1 00:22:01.174 --rc geninfo_all_blocks=1 00:22:01.174 --rc geninfo_unexecuted_blocks=1 00:22:01.174 00:22:01.174 ' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.174 --rc genhtml_branch_coverage=1 00:22:01.174 --rc genhtml_function_coverage=1 00:22:01.174 --rc genhtml_legend=1 00:22:01.174 --rc geninfo_all_blocks=1 00:22:01.174 --rc geninfo_unexecuted_blocks=1 00:22:01.174 00:22:01.174 ' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.174 --rc genhtml_branch_coverage=1 00:22:01.174 --rc genhtml_function_coverage=1 00:22:01.174 --rc genhtml_legend=1 00:22:01.174 --rc geninfo_all_blocks=1 00:22:01.174 --rc geninfo_unexecuted_blocks=1 00:22:01.174 00:22:01.174 ' 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.174 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.434 10:39:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.087 10:39:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:08.087 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.088 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.088 
10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.088 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.088 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.088 10:39:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.088 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:22:08.088 00:22:08.088 --- 10.0.0.2 ping statistics --- 00:22:08.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.088 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:22:08.088 00:22:08.088 --- 10.0.0.1 ping statistics --- 00:22:08.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.088 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3567187 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3567187 00:22:08.088 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3567187 ']' 00:22:08.089 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.089 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.089 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
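The target-launch trace above reduces to starting `nvmf_tgt` inside the test's network namespace and then waiting on its RPC socket. A dry-run sketch of that step (the `run` wrapper only echoes, so this executes without root or an SPDK build; the binary path and the `waitforlisten` helper name are taken from this log, the pid value from the `nvmfpid=` line):

```shell
#!/bin/sh
# Dry-run sketch: echo each command instead of executing it, so the
# sequence can be inspected on any machine. The namespace name
# (cvl_0_0_ns_spdk) and pid (3567187) come from this run's log.
run() { echo "+ $*"; }

NVMF_TGT=/path/to/spdk/build/bin/nvmf_tgt   # illustrative; the log uses the Jenkins workspace path
NVMFPID=3567187

# Launch the target inside the namespace: shm id 0, full tracepoint
# mask (0xFFFF), 4-core mask (0xF) -- the flags traced in the log.
run ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF

# waitforlisten polls until the app listens on /var/tmp/spdk.sock.
run waitforlisten "$NVMFPID"
```

The dry-run form mirrors the order of operations without reproducing the autotest harness itself.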
00:22:08.089 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.089 10:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.089 [2024-11-20 10:39:07.932397] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:22:08.089 [2024-11-20 10:39:07.932449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.089 [2024-11-20 10:39:08.014278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.089 [2024-11-20 10:39:08.059320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.089 [2024-11-20 10:39:08.059362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.089 [2024-11-20 10:39:08.059371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.089 [2024-11-20 10:39:08.059377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.089 [2024-11-20 10:39:08.059382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
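Once the target is up, the subsystem that the later `nvmf_get_subsystems` dump reports is assembled with a handful of JSON-RPC calls, each visible as an `rpc_cmd` trace in the log. A dry-run sketch of that sequence (the `rpc` wrapper just echoes; a real run would invoke SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock` — all command names and arguments below are copied from this log):

```shell
#!/bin/sh
# Dry-run: print each RPC rather than invoking scripts/rpc.py, so the
# build order can be read without a running SPDK target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two `add_listener` calls explain why the `nvmf_get_subsystems` output further down shows both the discovery subsystem and `cnode1` listening on 10.0.0.2:4420.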
00:22:08.089 [2024-11-20 10:39:08.060926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.089 [2024-11-20 10:39:08.061036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.089 [2024-11-20 10:39:08.061068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.089 [2024-11-20 10:39:08.061069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.089 [2024-11-20 10:39:08.791743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.089 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.348 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:08.348 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.348 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.348 Malloc0 00:22:08.348 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.348 10:39:08 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.348 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.349 [2024-11-20 10:39:08.892405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.349 10:39:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.349 [ 00:22:08.349 { 00:22:08.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.349 "subtype": "Discovery", 00:22:08.349 "listen_addresses": [ 00:22:08.349 { 00:22:08.349 "trtype": "TCP", 00:22:08.349 "adrfam": "IPv4", 00:22:08.349 "traddr": "10.0.0.2", 00:22:08.349 "trsvcid": "4420" 00:22:08.349 } 00:22:08.349 ], 00:22:08.349 "allow_any_host": true, 00:22:08.349 "hosts": [] 00:22:08.349 }, 00:22:08.349 { 00:22:08.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.349 "subtype": "NVMe", 00:22:08.349 "listen_addresses": [ 00:22:08.349 { 00:22:08.349 "trtype": "TCP", 00:22:08.349 "adrfam": "IPv4", 00:22:08.349 "traddr": "10.0.0.2", 00:22:08.349 "trsvcid": "4420" 00:22:08.349 } 00:22:08.349 ], 00:22:08.349 "allow_any_host": true, 00:22:08.349 "hosts": [], 00:22:08.349 "serial_number": "SPDK00000000000001", 00:22:08.349 "model_number": "SPDK bdev Controller", 00:22:08.349 "max_namespaces": 32, 00:22:08.349 "min_cntlid": 1, 00:22:08.349 "max_cntlid": 65519, 00:22:08.349 "namespaces": [ 00:22:08.349 { 00:22:08.349 "nsid": 1, 00:22:08.349 "bdev_name": "Malloc0", 00:22:08.349 "name": "Malloc0", 00:22:08.349 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:08.349 "eui64": "ABCDEF0123456789", 00:22:08.349 "uuid": "c8f68e70-e3e7-4ddc-b5ae-f89903b1f465" 00:22:08.349 } 00:22:08.349 ] 00:22:08.349 } 00:22:08.349 ] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.349 10:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:08.349 [2024-11-20 10:39:08.944544] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:22:08.349 [2024-11-20 10:39:08.944586] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3567553 ] 00:22:08.349 [2024-11-20 10:39:08.989988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:08.349 [2024-11-20 10:39:08.990034] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:08.349 [2024-11-20 10:39:08.990042] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:08.349 [2024-11-20 10:39:08.990054] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:08.349 [2024-11-20 10:39:08.990064] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:08.349 [2024-11-20 10:39:08.990587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:08.349 [2024-11-20 10:39:08.990618] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1859690 0 00:22:08.349 [2024-11-20 10:39:09.004962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:08.349 [2024-11-20 10:39:09.004977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:08.349 [2024-11-20 10:39:09.004982] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:08.349 [2024-11-20 10:39:09.004985] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:08.349 [2024-11-20 10:39:09.005018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.005023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.005027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.349 [2024-11-20 10:39:09.005039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:08.349 [2024-11-20 10:39:09.005056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.349 [2024-11-20 10:39:09.012959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.349 [2024-11-20 10:39:09.012969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.349 [2024-11-20 10:39:09.012973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.012977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.349 [2024-11-20 10:39:09.012988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:08.349 [2024-11-20 10:39:09.012998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:08.349 [2024-11-20 10:39:09.013003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:08.349 [2024-11-20 10:39:09.013016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 
00:22:08.349 [2024-11-20 10:39:09.013030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.349 [2024-11-20 10:39:09.013042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.349 [2024-11-20 10:39:09.013118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.349 [2024-11-20 10:39:09.013125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.349 [2024-11-20 10:39:09.013128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.349 [2024-11-20 10:39:09.013136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:08.349 [2024-11-20 10:39:09.013143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:08.349 [2024-11-20 10:39:09.013149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.349 [2024-11-20 10:39:09.013162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.349 [2024-11-20 10:39:09.013172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.349 [2024-11-20 10:39:09.013236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.349 [2024-11-20 10:39:09.013242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:08.349 [2024-11-20 10:39:09.013245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.349 [2024-11-20 10:39:09.013253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:08.349 [2024-11-20 10:39:09.013260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:08.349 [2024-11-20 10:39:09.013266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.349 [2024-11-20 10:39:09.013278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.349 [2024-11-20 10:39:09.013288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.349 [2024-11-20 10:39:09.013353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.349 [2024-11-20 10:39:09.013359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.349 [2024-11-20 10:39:09.013362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.349 [2024-11-20 10:39:09.013370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:08.349 [2024-11-20 10:39:09.013381] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.349 [2024-11-20 10:39:09.013388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.349 [2024-11-20 10:39:09.013394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.349 [2024-11-20 10:39:09.013403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 10:39:09.013462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.013467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.013470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.013478] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:08.350 [2024-11-20 10:39:09.013482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:08.350 [2024-11-20 10:39:09.013489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:08.350 [2024-11-20 10:39:09.013597] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:08.350 [2024-11-20 10:39:09.013601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:08.350 [2024-11-20 10:39:09.013610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.013623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.350 [2024-11-20 10:39:09.013633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 10:39:09.013694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.013700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.013703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.013711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:08.350 [2024-11-20 10:39:09.013719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.013732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.350 [2024-11-20 10:39:09.013740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 
10:39:09.013805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.013811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.013814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.013823] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:08.350 [2024-11-20 10:39:09.013827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.013834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:08.350 [2024-11-20 10:39:09.013843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.013851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.013861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.350 [2024-11-20 10:39:09.013871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 10:39:09.013971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.350 [2024-11-20 10:39:09.013978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:08.350 [2024-11-20 10:39:09.013981] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.013985] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1859690): datao=0, datal=4096, cccid=0 00:22:08.350 [2024-11-20 10:39:09.013989] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bb100) on tqpair(0x1859690): expected_datao=0, payload_size=4096 00:22:08.350 [2024-11-20 10:39:09.013993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014000] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014003] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.014034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.014037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.014047] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:08.350 [2024-11-20 10:39:09.014052] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:08.350 [2024-11-20 10:39:09.014056] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:08.350 [2024-11-20 10:39:09.014064] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:08.350 [2024-11-20 10:39:09.014068] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:08.350 [2024-11-20 10:39:09.014072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.014082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.014089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.350 [2024-11-20 10:39:09.014115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 10:39:09.014185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.014190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.014193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.014204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014216] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.350 [2024-11-20 10:39:09.014221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.350 [2024-11-20 10:39:09.014238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.350 [2024-11-20 10:39:09.014254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.350 [2024-11-20 10:39:09.014270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.014278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:08.350 [2024-11-20 10:39:09.014284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1859690) 00:22:08.350 [2024-11-20 10:39:09.014293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.350 [2024-11-20 10:39:09.014304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb100, cid 0, qid 0 00:22:08.350 [2024-11-20 10:39:09.014308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb280, cid 1, qid 0 00:22:08.350 [2024-11-20 10:39:09.014313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb400, cid 2, qid 0 00:22:08.350 [2024-11-20 10:39:09.014317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.350 [2024-11-20 10:39:09.014320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb700, cid 4, qid 0 00:22:08.350 [2024-11-20 10:39:09.014411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.350 [2024-11-20 10:39:09.014416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.350 [2024-11-20 10:39:09.014419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.350 [2024-11-20 10:39:09.014424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb700) on tqpair=0x1859690 00:22:08.350 [2024-11-20 10:39:09.014431] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:08.350 [2024-11-20 10:39:09.014436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:08.350 [2024-11-20 10:39:09.014445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.014449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1859690) 00:22:08.351 [2024-11-20 10:39:09.014454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.351 [2024-11-20 10:39:09.014464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb700, cid 4, qid 0 00:22:08.351 [2024-11-20 10:39:09.014539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.351 [2024-11-20 10:39:09.014545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.351 [2024-11-20 10:39:09.014548] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.014551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1859690): datao=0, datal=4096, cccid=4 00:22:08.351 [2024-11-20 10:39:09.014555] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bb700) on tqpair(0x1859690): expected_datao=0, payload_size=4096 00:22:08.351 [2024-11-20 10:39:09.014559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.014565] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.014568] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.351 [2024-11-20 10:39:09.055050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.351 [2024-11-20 10:39:09.055054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x18bb700) on tqpair=0x1859690 00:22:08.351 [2024-11-20 10:39:09.055071] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:08.351 [2024-11-20 10:39:09.055095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1859690) 00:22:08.351 [2024-11-20 10:39:09.055107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.351 [2024-11-20 10:39:09.055113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1859690) 00:22:08.351 [2024-11-20 10:39:09.055125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.351 [2024-11-20 10:39:09.055140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb700, cid 4, qid 0 00:22:08.351 [2024-11-20 10:39:09.055145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb880, cid 5, qid 0 00:22:08.351 [2024-11-20 10:39:09.055269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.351 [2024-11-20 10:39:09.055275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.351 [2024-11-20 10:39:09.055278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055282] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1859690): datao=0, datal=1024, cccid=4 00:22:08.351 [2024-11-20 10:39:09.055285] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bb700) on tqpair(0x1859690): expected_datao=0, payload_size=1024 00:22:08.351 [2024-11-20 10:39:09.055292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055298] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055301] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.351 [2024-11-20 10:39:09.055311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.351 [2024-11-20 10:39:09.055314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.351 [2024-11-20 10:39:09.055317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb880) on tqpair=0x1859690 00:22:08.628 [2024-11-20 10:39:09.096055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.628 [2024-11-20 10:39:09.096072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.628 [2024-11-20 10:39:09.096076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb700) on tqpair=0x1859690 00:22:08.628 [2024-11-20 10:39:09.096094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1859690) 00:22:08.628 [2024-11-20 10:39:09.096105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.628 [2024-11-20 10:39:09.096123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb700, cid 4, qid 0 00:22:08.628 [2024-11-20 10:39:09.096240] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.628 [2024-11-20 10:39:09.096248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.628 [2024-11-20 10:39:09.096251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1859690): datao=0, datal=3072, cccid=4 00:22:08.628 [2024-11-20 10:39:09.096260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bb700) on tqpair(0x1859690): expected_datao=0, payload_size=3072 00:22:08.628 [2024-11-20 10:39:09.096264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096270] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.628 [2024-11-20 10:39:09.096304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.628 [2024-11-20 10:39:09.096307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb700) on tqpair=0x1859690 00:22:08.628 [2024-11-20 10:39:09.096318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1859690) 00:22:08.628 [2024-11-20 10:39:09.096327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.628 [2024-11-20 10:39:09.096341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb700, cid 4, qid 0 00:22:08.628 [2024-11-20 
10:39:09.096411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.628 [2024-11-20 10:39:09.096417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.628 [2024-11-20 10:39:09.096420] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1859690): datao=0, datal=8, cccid=4 00:22:08.628 [2024-11-20 10:39:09.096427] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bb700) on tqpair(0x1859690): expected_datao=0, payload_size=8 00:22:08.628 [2024-11-20 10:39:09.096434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096440] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.096443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.137104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.628 [2024-11-20 10:39:09.137115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.628 [2024-11-20 10:39:09.137118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.628 [2024-11-20 10:39:09.137122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb700) on tqpair=0x1859690 00:22:08.628 ===================================================== 00:22:08.629 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:08.629 ===================================================== 00:22:08.629 Controller Capabilities/Features 00:22:08.629 ================================ 00:22:08.629 Vendor ID: 0000 00:22:08.629 Subsystem Vendor ID: 0000 00:22:08.629 Serial Number: .................... 00:22:08.629 Model Number: ........................................ 
00:22:08.629 Firmware Version: 25.01 00:22:08.629 Recommended Arb Burst: 0 00:22:08.629 IEEE OUI Identifier: 00 00 00 00:22:08.629 Multi-path I/O 00:22:08.629 May have multiple subsystem ports: No 00:22:08.629 May have multiple controllers: No 00:22:08.629 Associated with SR-IOV VF: No 00:22:08.629 Max Data Transfer Size: 131072 00:22:08.629 Max Number of Namespaces: 0 00:22:08.629 Max Number of I/O Queues: 1024 00:22:08.629 NVMe Specification Version (VS): 1.3 00:22:08.629 NVMe Specification Version (Identify): 1.3 00:22:08.629 Maximum Queue Entries: 128 00:22:08.629 Contiguous Queues Required: Yes 00:22:08.629 Arbitration Mechanisms Supported 00:22:08.629 Weighted Round Robin: Not Supported 00:22:08.629 Vendor Specific: Not Supported 00:22:08.629 Reset Timeout: 15000 ms 00:22:08.629 Doorbell Stride: 4 bytes 00:22:08.629 NVM Subsystem Reset: Not Supported 00:22:08.629 Command Sets Supported 00:22:08.629 NVM Command Set: Supported 00:22:08.629 Boot Partition: Not Supported 00:22:08.629 Memory Page Size Minimum: 4096 bytes 00:22:08.629 Memory Page Size Maximum: 4096 bytes 00:22:08.629 Persistent Memory Region: Not Supported 00:22:08.629 Optional Asynchronous Events Supported 00:22:08.629 Namespace Attribute Notices: Not Supported 00:22:08.629 Firmware Activation Notices: Not Supported 00:22:08.629 ANA Change Notices: Not Supported 00:22:08.629 PLE Aggregate Log Change Notices: Not Supported 00:22:08.629 LBA Status Info Alert Notices: Not Supported 00:22:08.629 EGE Aggregate Log Change Notices: Not Supported 00:22:08.629 Normal NVM Subsystem Shutdown event: Not Supported 00:22:08.629 Zone Descriptor Change Notices: Not Supported 00:22:08.629 Discovery Log Change Notices: Supported 00:22:08.629 Controller Attributes 00:22:08.629 128-bit Host Identifier: Not Supported 00:22:08.629 Non-Operational Permissive Mode: Not Supported 00:22:08.629 NVM Sets: Not Supported 00:22:08.629 Read Recovery Levels: Not Supported 00:22:08.629 Endurance Groups: Not Supported 00:22:08.629 
Predictable Latency Mode: Not Supported 00:22:08.629 Traffic Based Keep ALive: Not Supported 00:22:08.629 Namespace Granularity: Not Supported 00:22:08.629 SQ Associations: Not Supported 00:22:08.629 UUID List: Not Supported 00:22:08.629 Multi-Domain Subsystem: Not Supported 00:22:08.629 Fixed Capacity Management: Not Supported 00:22:08.629 Variable Capacity Management: Not Supported 00:22:08.629 Delete Endurance Group: Not Supported 00:22:08.629 Delete NVM Set: Not Supported 00:22:08.629 Extended LBA Formats Supported: Not Supported 00:22:08.629 Flexible Data Placement Supported: Not Supported 00:22:08.629 00:22:08.629 Controller Memory Buffer Support 00:22:08.629 ================================ 00:22:08.629 Supported: No 00:22:08.629 00:22:08.629 Persistent Memory Region Support 00:22:08.629 ================================ 00:22:08.629 Supported: No 00:22:08.629 00:22:08.629 Admin Command Set Attributes 00:22:08.629 ============================ 00:22:08.629 Security Send/Receive: Not Supported 00:22:08.629 Format NVM: Not Supported 00:22:08.629 Firmware Activate/Download: Not Supported 00:22:08.629 Namespace Management: Not Supported 00:22:08.629 Device Self-Test: Not Supported 00:22:08.629 Directives: Not Supported 00:22:08.629 NVMe-MI: Not Supported 00:22:08.629 Virtualization Management: Not Supported 00:22:08.629 Doorbell Buffer Config: Not Supported 00:22:08.629 Get LBA Status Capability: Not Supported 00:22:08.629 Command & Feature Lockdown Capability: Not Supported 00:22:08.629 Abort Command Limit: 1 00:22:08.629 Async Event Request Limit: 4 00:22:08.629 Number of Firmware Slots: N/A 00:22:08.629 Firmware Slot 1 Read-Only: N/A 00:22:08.629 Firmware Activation Without Reset: N/A 00:22:08.629 Multiple Update Detection Support: N/A 00:22:08.629 Firmware Update Granularity: No Information Provided 00:22:08.629 Per-Namespace SMART Log: No 00:22:08.629 Asymmetric Namespace Access Log Page: Not Supported 00:22:08.629 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:08.629 Command Effects Log Page: Not Supported 00:22:08.629 Get Log Page Extended Data: Supported 00:22:08.629 Telemetry Log Pages: Not Supported 00:22:08.629 Persistent Event Log Pages: Not Supported 00:22:08.629 Supported Log Pages Log Page: May Support 00:22:08.629 Commands Supported & Effects Log Page: Not Supported 00:22:08.629 Feature Identifiers & Effects Log Page:May Support 00:22:08.629 NVMe-MI Commands & Effects Log Page: May Support 00:22:08.629 Data Area 4 for Telemetry Log: Not Supported 00:22:08.629 Error Log Page Entries Supported: 128 00:22:08.629 Keep Alive: Not Supported 00:22:08.629 00:22:08.629 NVM Command Set Attributes 00:22:08.629 ========================== 00:22:08.629 Submission Queue Entry Size 00:22:08.629 Max: 1 00:22:08.629 Min: 1 00:22:08.629 Completion Queue Entry Size 00:22:08.629 Max: 1 00:22:08.629 Min: 1 00:22:08.629 Number of Namespaces: 0 00:22:08.629 Compare Command: Not Supported 00:22:08.629 Write Uncorrectable Command: Not Supported 00:22:08.629 Dataset Management Command: Not Supported 00:22:08.629 Write Zeroes Command: Not Supported 00:22:08.629 Set Features Save Field: Not Supported 00:22:08.629 Reservations: Not Supported 00:22:08.629 Timestamp: Not Supported 00:22:08.629 Copy: Not Supported 00:22:08.629 Volatile Write Cache: Not Present 00:22:08.629 Atomic Write Unit (Normal): 1 00:22:08.629 Atomic Write Unit (PFail): 1 00:22:08.629 Atomic Compare & Write Unit: 1 00:22:08.629 Fused Compare & Write: Supported 00:22:08.629 Scatter-Gather List 00:22:08.629 SGL Command Set: Supported 00:22:08.629 SGL Keyed: Supported 00:22:08.629 SGL Bit Bucket Descriptor: Not Supported 00:22:08.629 SGL Metadata Pointer: Not Supported 00:22:08.629 Oversized SGL: Not Supported 00:22:08.629 SGL Metadata Address: Not Supported 00:22:08.629 SGL Offset: Supported 00:22:08.629 Transport SGL Data Block: Not Supported 00:22:08.629 Replay Protected Memory Block: Not Supported 00:22:08.629 00:22:08.629 
Firmware Slot Information 00:22:08.629 ========================= 00:22:08.629 Active slot: 0 00:22:08.629 00:22:08.629 00:22:08.629 Error Log 00:22:08.629 ========= 00:22:08.629 00:22:08.629 Active Namespaces 00:22:08.629 ================= 00:22:08.629 Discovery Log Page 00:22:08.629 ================== 00:22:08.629 Generation Counter: 2 00:22:08.629 Number of Records: 2 00:22:08.629 Record Format: 0 00:22:08.629 00:22:08.629 Discovery Log Entry 0 00:22:08.629 ---------------------- 00:22:08.629 Transport Type: 3 (TCP) 00:22:08.629 Address Family: 1 (IPv4) 00:22:08.629 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:08.629 Entry Flags: 00:22:08.629 Duplicate Returned Information: 1 00:22:08.629 Explicit Persistent Connection Support for Discovery: 1 00:22:08.629 Transport Requirements: 00:22:08.629 Secure Channel: Not Required 00:22:08.629 Port ID: 0 (0x0000) 00:22:08.629 Controller ID: 65535 (0xffff) 00:22:08.629 Admin Max SQ Size: 128 00:22:08.629 Transport Service Identifier: 4420 00:22:08.629 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:08.629 Transport Address: 10.0.0.2 00:22:08.629 Discovery Log Entry 1 00:22:08.629 ---------------------- 00:22:08.629 Transport Type: 3 (TCP) 00:22:08.629 Address Family: 1 (IPv4) 00:22:08.629 Subsystem Type: 2 (NVM Subsystem) 00:22:08.629 Entry Flags: 00:22:08.629 Duplicate Returned Information: 0 00:22:08.629 Explicit Persistent Connection Support for Discovery: 0 00:22:08.629 Transport Requirements: 00:22:08.629 Secure Channel: Not Required 00:22:08.629 Port ID: 0 (0x0000) 00:22:08.629 Controller ID: 65535 (0xffff) 00:22:08.629 Admin Max SQ Size: 128 00:22:08.629 Transport Service Identifier: 4420 00:22:08.629 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:08.629 Transport Address: 10.0.0.2 [2024-11-20 10:39:09.137210] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:08.629 [2024-11-20 
10:39:09.137221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb100) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.629 [2024-11-20 10:39:09.137232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb280) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.629 [2024-11-20 10:39:09.137241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb400) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.629 [2024-11-20 10:39:09.137249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.629 [2024-11-20 10:39:09.137264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.629 [2024-11-20 10:39:09.137279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.629 [2024-11-20 10:39:09.137293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.629 [2024-11-20 10:39:09.137356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.629 [2024-11-20 
10:39:09.137363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.629 [2024-11-20 10:39:09.137366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.629 [2024-11-20 10:39:09.137388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.629 [2024-11-20 10:39:09.137401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.629 [2024-11-20 10:39:09.137478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.629 [2024-11-20 10:39:09.137484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.629 [2024-11-20 10:39:09.137487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.629 [2024-11-20 10:39:09.137495] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:08.629 [2024-11-20 10:39:09.137501] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:08.629 [2024-11-20 10:39:09.137509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.629 [2024-11-20 10:39:09.137513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.629 
[2024-11-20 10:39:09.137516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.629 [2024-11-20 10:39:09.137522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.629 [2024-11-20 10:39:09.137531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.629 [2024-11-20 10:39:09.137595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.137601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.137605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.137616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.137629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.137639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.137711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.137717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.137720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on 
tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.137732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.137745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.137754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.137832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.137837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.137840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.137853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.137866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.137876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.137953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.137960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:08.630 [2024-11-20 10:39:09.137963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.137976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.137984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.137990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138323] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138535] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138743] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.138836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.138842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.138845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.138857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.138865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.138871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.138881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.142956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 
10:39:09.142965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.142968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.142972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.142981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.142984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.142988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1859690) 00:22:08.630 [2024-11-20 10:39:09.142994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.630 [2024-11-20 10:39:09.143005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bb580, cid 3, qid 0 00:22:08.630 [2024-11-20 10:39:09.143085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.143091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.143095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.143098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bb580) on tqpair=0x1859690 00:22:08.630 [2024-11-20 10:39:09.143104] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:08.630 00:22:08.630 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:08.630 [2024-11-20 10:39:09.182494] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 
initialization... 00:22:08.630 [2024-11-20 10:39:09.182530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3567712 ] 00:22:08.630 [2024-11-20 10:39:09.221585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:08.630 [2024-11-20 10:39:09.221627] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:08.630 [2024-11-20 10:39:09.221632] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:08.630 [2024-11-20 10:39:09.221642] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:08.630 [2024-11-20 10:39:09.221650] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:08.630 [2024-11-20 10:39:09.225121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:08.630 [2024-11-20 10:39:09.225150] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x861690 0 00:22:08.630 [2024-11-20 10:39:09.232037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:08.630 [2024-11-20 10:39:09.232050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:08.630 [2024-11-20 10:39:09.232054] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:08.630 [2024-11-20 10:39:09.232060] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:08.630 [2024-11-20 10:39:09.232082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.232087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.232091] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.630 [2024-11-20 10:39:09.232101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:08.630 [2024-11-20 10:39:09.232116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.630 [2024-11-20 10:39:09.239956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.630 [2024-11-20 10:39:09.239964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.630 [2024-11-20 10:39:09.239967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.630 [2024-11-20 10:39:09.239971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.630 [2024-11-20 10:39:09.239979] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:08.631 [2024-11-20 10:39:09.239985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:08.631 [2024-11-20 10:39:09.239990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:08.631 [2024-11-20 10:39:09.240001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.631 [2024-11-20 10:39:09.240015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.631 [2024-11-20 10:39:09.240027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.631 [2024-11-20 10:39:09.240192] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.631 [2024-11-20 10:39:09.240198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.631 [2024-11-20 10:39:09.240201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.631 [2024-11-20 10:39:09.240209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:08.631 [2024-11-20 10:39:09.240215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:08.631 [2024-11-20 10:39:09.240222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.631 [2024-11-20 10:39:09.240234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.631 [2024-11-20 10:39:09.240244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.631 [2024-11-20 10:39:09.240308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.631 [2024-11-20 10:39:09.240314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.631 [2024-11-20 10:39:09.240316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.631 [2024-11-20 10:39:09.240325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:08.631 [2024-11-20 10:39:09.240331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:08.631 [2024-11-20 10:39:09.240339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.631 [2024-11-20 10:39:09.240352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.631 [2024-11-20 10:39:09.240362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.631 [2024-11-20 10:39:09.240423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.631 [2024-11-20 10:39:09.240428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.631 [2024-11-20 10:39:09.240431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.631 [2024-11-20 10:39:09.240439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:08.631 [2024-11-20 10:39:09.240447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.631 [2024-11-20 10:39:09.240460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.631 [2024-11-20 10:39:09.240469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.631 [2024-11-20 10:39:09.240531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.631 [2024-11-20 10:39:09.240536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.631 [2024-11-20 10:39:09.240540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.631 [2024-11-20 10:39:09.240543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.631 [2024-11-20 10:39:09.240547] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:08.631 [2024-11-20 10:39:09.240551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:08.631 [2024-11-20 10:39:09.240558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:08.631 [2024-11-20 10:39:09.240666] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:08.631 [2024-11-20 10:39:09.240671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:08.632 [2024-11-20 10:39:09.240677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.632 [2024-11-20 10:39:09.240689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.632 [2024-11-20 10:39:09.240699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.632 [2024-11-20 10:39:09.240764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.632 [2024-11-20 10:39:09.240769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.632 [2024-11-20 10:39:09.240772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.632 [2024-11-20 10:39:09.240780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:08.632 [2024-11-20 10:39:09.240790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.632 [2024-11-20 10:39:09.240802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.632 [2024-11-20 10:39:09.240812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.632 [2024-11-20 10:39:09.240878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.632 [2024-11-20 10:39:09.240884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.632 [2024-11-20 10:39:09.240887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.632 [2024-11-20 10:39:09.240891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.633 [2024-11-20 10:39:09.240894] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:08.633 [2024-11-20 10:39:09.240899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.240906] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:08.633 [2024-11-20 10:39:09.240918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.240926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.240929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.240936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.633 [2024-11-20 10:39:09.240945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.633 [2024-11-20 10:39:09.241054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.633 [2024-11-20 10:39:09.241059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.633 [2024-11-20 10:39:09.241062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.241065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=4096, cccid=0 00:22:08.633 [2024-11-20 10:39:09.241069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3100) on tqpair(0x861690): expected_datao=0, payload_size=4096 00:22:08.633 [2024-11-20 10:39:09.241073] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.241088] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.241092] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.284957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.633 [2024-11-20 10:39:09.284969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.633 [2024-11-20 10:39:09.284973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.284977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.633 [2024-11-20 10:39:09.284984] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:08.633 [2024-11-20 10:39:09.284989] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:08.633 [2024-11-20 10:39:09.284993] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:08.633 [2024-11-20 10:39:09.285000] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:08.633 [2024-11-20 10:39:09.285007] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:08.633 [2024-11-20 10:39:09.285011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285030] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.633 [2024-11-20 10:39:09.285054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3100, cid 0, qid 0 00:22:08.633 [2024-11-20 10:39:09.285210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.633 [2024-11-20 10:39:09.285216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.633 [2024-11-20 10:39:09.285219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690 00:22:08.633 [2024-11-20 10:39:09.285228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.633 [2024-11-20 10:39:09.285245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:08.633 [2024-11-20 10:39:09.285262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.633 [2024-11-20 10:39:09.285279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.633 [2024-11-20 10:39:09.285294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.633 [2024-11-20 10:39:09.285330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x8c3100, cid 0, qid 0 00:22:08.633 [2024-11-20 10:39:09.285335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3280, cid 1, qid 0 00:22:08.633 [2024-11-20 10:39:09.285339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3400, cid 2, qid 0 00:22:08.633 [2024-11-20 10:39:09.285343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0 00:22:08.633 [2024-11-20 10:39:09.285347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.633 [2024-11-20 10:39:09.285448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.633 [2024-11-20 10:39:09.285454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.633 [2024-11-20 10:39:09.285457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.633 [2024-11-20 10:39:09.285467] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:08.633 [2024-11-20 10:39:09.285471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:08.633 [2024-11-20 10:39:09.285490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.633 [2024-11-20 10:39:09.285493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.633 [2024-11-20 
10:39:09.285496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.633 [2024-11-20 10:39:09.285502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.633 [2024-11-20 10:39:09.285512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.633 [2024-11-20 10:39:09.285570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.633 [2024-11-20 10:39:09.285576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.634 [2024-11-20 10:39:09.285579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.285583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.634 [2024-11-20 10:39:09.285637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:08.634 [2024-11-20 10:39:09.285646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:08.634 [2024-11-20 10:39:09.285653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.285656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.634 [2024-11-20 10:39:09.285662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.634 [2024-11-20 10:39:09.285672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.634 [2024-11-20 10:39:09.285745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.634 [2024-11-20 10:39:09.285751] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.634 [2024-11-20 10:39:09.285755] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.285758] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=4096, cccid=4 00:22:08.634 [2024-11-20 10:39:09.285762] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3700) on tqpair(0x861690): expected_datao=0, payload_size=4096 00:22:08.634 [2024-11-20 10:39:09.285767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.285780] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.285784] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.326089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.634 [2024-11-20 10:39:09.326100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.634 [2024-11-20 10:39:09.326103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.634 [2024-11-20 10:39:09.326107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.634 [2024-11-20 10:39:09.326116] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:08.635 [2024-11-20 10:39:09.326126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:08.635 [2024-11-20 10:39:09.326134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:08.635 [2024-11-20 10:39:09.326141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.635 [2024-11-20 10:39:09.326145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x861690) 00:22:08.635 [2024-11-20 10:39:09.326152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.635 [2024-11-20 10:39:09.326164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.635 [2024-11-20 10:39:09.326249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.635 [2024-11-20 10:39:09.326256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.635 [2024-11-20 10:39:09.326259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.635 [2024-11-20 10:39:09.326262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=4096, cccid=4 00:22:08.635 [2024-11-20 10:39:09.326266] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3700) on tqpair(0x861690): expected_datao=0, payload_size=4096 00:22:08.635 [2024-11-20 10:39:09.326270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.635 [2024-11-20 10:39:09.326280] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.635 [2024-11-20 10:39:09.326286] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.367165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.367169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.367187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:08.899 [2024-11-20 
10:39:09.367196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.367204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.367214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.367226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.899 [2024-11-20 10:39:09.367294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.899 [2024-11-20 10:39:09.367300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.899 [2024-11-20 10:39:09.367306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367309] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=4096, cccid=4 00:22:08.899 [2024-11-20 10:39:09.367314] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3700) on tqpair(0x861690): expected_datao=0, payload_size=4096 00:22:08.899 [2024-11-20 10:39:09.367317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367328] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.367331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.411960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.411973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.411976] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.411980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.411988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.411997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412024] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:08.899 [2024-11-20 10:39:09.412029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:08.899 [2024-11-20 10:39:09.412033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:08.899 [2024-11-20 10:39:09.412047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412050] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.899 [2024-11-20 10:39:09.412090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.899 [2024-11-20 10:39:09.412095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3880, cid 5, qid 0 00:22:08.899 [2024-11-20 10:39:09.412173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.412179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.412182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.412194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.412199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.412202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3880) on tqpair=0x861690 00:22:08.899 [2024-11-20 
10:39:09.412213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3880, cid 5, qid 0 00:22:08.899 [2024-11-20 10:39:09.412303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.412309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.412312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3880) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.412323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3880, cid 5, qid 0 00:22:08.899 [2024-11-20 10:39:09.412412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.412418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.412421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x8c3880) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.412432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3880, cid 5, qid 0 00:22:08.899 [2024-11-20 10:39:09.412511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.899 [2024-11-20 10:39:09.412517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.899 [2024-11-20 10:39:09.412520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3880) on tqpair=0x861690 00:22:08.899 [2024-11-20 10:39:09.412536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 
[2024-11-20 10:39:09.412567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x861690) 00:22:08.899 [2024-11-20 10:39:09.412580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.899 [2024-11-20 10:39:09.412587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.899 [2024-11-20 10:39:09.412590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x861690) 00:22:08.900 [2024-11-20 10:39:09.412595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.900 [2024-11-20 10:39:09.412606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3880, cid 5, qid 0 00:22:08.900 [2024-11-20 10:39:09.412610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3700, cid 4, qid 0 00:22:08.900 [2024-11-20 10:39:09.412614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3a00, cid 6, qid 0 00:22:08.900 [2024-11-20 10:39:09.412618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3b80, cid 7, qid 0 00:22:08.900 [2024-11-20 10:39:09.412754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.900 [2024-11-20 10:39:09.412760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.900 [2024-11-20 10:39:09.412763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412766] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=8192, cccid=5 00:22:08.900 [2024-11-20 10:39:09.412770] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x8c3880) on tqpair(0x861690): expected_datao=0, payload_size=8192 00:22:08.900 [2024-11-20 10:39:09.412774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412800] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.900 [2024-11-20 10:39:09.412810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.900 [2024-11-20 10:39:09.412813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412816] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=512, cccid=4 00:22:08.900 [2024-11-20 10:39:09.412820] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3700) on tqpair(0x861690): expected_datao=0, payload_size=512 00:22:08.900 [2024-11-20 10:39:09.412823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412829] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412832] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.900 [2024-11-20 10:39:09.412842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.900 [2024-11-20 10:39:09.412845] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412848] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=512, cccid=6 00:22:08.900 [2024-11-20 10:39:09.412851] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3a00) on tqpair(0x861690): expected_datao=0, 
payload_size=512 00:22:08.900 [2024-11-20 10:39:09.412855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412861] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.900 [2024-11-20 10:39:09.412873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.900 [2024-11-20 10:39:09.412876] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412881] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861690): datao=0, datal=4096, cccid=7 00:22:08.900 [2024-11-20 10:39:09.412885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c3b80) on tqpair(0x861690): expected_datao=0, payload_size=4096 00:22:08.900 [2024-11-20 10:39:09.412889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412894] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412897] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.900 [2024-11-20 10:39:09.412909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.900 [2024-11-20 10:39:09.412912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.900 [2024-11-20 10:39:09.412916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3880) on tqpair=0x861690 00:22:08.900 [2024-11-20 10:39:09.412926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.900 [2024-11-20 10:39:09.412931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.900 [2024-11-20 
10:39:09.412934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.900 [2024-11-20 10:39:09.412937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3700) on tqpair=0x861690
00:22:08.900 [2024-11-20 10:39:09.412946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.900 [2024-11-20 10:39:09.412957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.900 [2024-11-20 10:39:09.412960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.900 [2024-11-20 10:39:09.412963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3a00) on tqpair=0x861690
00:22:08.900 [2024-11-20 10:39:09.412969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.900 [2024-11-20 10:39:09.412974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.900 [2024-11-20 10:39:09.412977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.900 [2024-11-20 10:39:09.412981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3b80) on tqpair=0x861690
00:22:08.900 =====================================================
00:22:08.900 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:08.900 =====================================================
00:22:08.900 Controller Capabilities/Features
00:22:08.900 ================================
00:22:08.900 Vendor ID: 8086
00:22:08.900 Subsystem Vendor ID: 8086
00:22:08.900 Serial Number: SPDK00000000000001
00:22:08.900 Model Number: SPDK bdev Controller
00:22:08.900 Firmware Version: 25.01
00:22:08.900 Recommended Arb Burst: 6
00:22:08.900 IEEE OUI Identifier: e4 d2 5c
00:22:08.900 Multi-path I/O
00:22:08.900 May have multiple subsystem ports: Yes
00:22:08.900 May have multiple controllers: Yes
00:22:08.900 Associated with SR-IOV VF: No
00:22:08.900 Max Data Transfer Size: 131072
00:22:08.900 Max Number of Namespaces: 32
00:22:08.900 Max Number of I/O Queues: 127
00:22:08.900 NVMe Specification Version (VS): 1.3
00:22:08.900 NVMe Specification Version (Identify): 1.3
00:22:08.900 Maximum Queue Entries: 128
00:22:08.900 Contiguous Queues Required: Yes
00:22:08.900 Arbitration Mechanisms Supported
00:22:08.900 Weighted Round Robin: Not Supported
00:22:08.900 Vendor Specific: Not Supported
00:22:08.900 Reset Timeout: 15000 ms
00:22:08.900 Doorbell Stride: 4 bytes
00:22:08.900 NVM Subsystem Reset: Not Supported
00:22:08.900 Command Sets Supported
00:22:08.900 NVM Command Set: Supported
00:22:08.900 Boot Partition: Not Supported
00:22:08.900 Memory Page Size Minimum: 4096 bytes
00:22:08.900 Memory Page Size Maximum: 4096 bytes
00:22:08.900 Persistent Memory Region: Not Supported
00:22:08.900 Optional Asynchronous Events Supported
00:22:08.900 Namespace Attribute Notices: Supported
00:22:08.900 Firmware Activation Notices: Not Supported
00:22:08.900 ANA Change Notices: Not Supported
00:22:08.900 PLE Aggregate Log Change Notices: Not Supported
00:22:08.900 LBA Status Info Alert Notices: Not Supported
00:22:08.900 EGE Aggregate Log Change Notices: Not Supported
00:22:08.900 Normal NVM Subsystem Shutdown event: Not Supported
00:22:08.900 Zone Descriptor Change Notices: Not Supported
00:22:08.900 Discovery Log Change Notices: Not Supported
00:22:08.900 Controller Attributes
00:22:08.900 128-bit Host Identifier: Supported
00:22:08.900 Non-Operational Permissive Mode: Not Supported
00:22:08.900 NVM Sets: Not Supported
00:22:08.900 Read Recovery Levels: Not Supported
00:22:08.900 Endurance Groups: Not Supported
00:22:08.900 Predictable Latency Mode: Not Supported
00:22:08.900 Traffic Based Keep ALive: Not Supported
00:22:08.900 Namespace Granularity: Not Supported
00:22:08.900 SQ Associations: Not Supported
00:22:08.900 UUID List: Not Supported
00:22:08.900 Multi-Domain Subsystem: Not Supported
00:22:08.900 Fixed Capacity Management: Not Supported
00:22:08.900 Variable Capacity Management: Not Supported
00:22:08.900 Delete Endurance Group: Not Supported
00:22:08.900 Delete NVM Set: Not Supported
00:22:08.900 Extended LBA Formats Supported: Not Supported
00:22:08.900 Flexible Data Placement Supported: Not Supported
00:22:08.900
00:22:08.900 Controller Memory Buffer Support
00:22:08.900 ================================
00:22:08.900 Supported: No
00:22:08.900
00:22:08.900 Persistent Memory Region Support
00:22:08.900 ================================
00:22:08.900 Supported: No
00:22:08.900
00:22:08.900 Admin Command Set Attributes
00:22:08.900 ============================
00:22:08.900 Security Send/Receive: Not Supported
00:22:08.900 Format NVM: Not Supported
00:22:08.900 Firmware Activate/Download: Not Supported
00:22:08.900 Namespace Management: Not Supported
00:22:08.900 Device Self-Test: Not Supported
00:22:08.900 Directives: Not Supported
00:22:08.900 NVMe-MI: Not Supported
00:22:08.900 Virtualization Management: Not Supported
00:22:08.900 Doorbell Buffer Config: Not Supported
00:22:08.900 Get LBA Status Capability: Not Supported
00:22:08.901 Command & Feature Lockdown Capability: Not Supported
00:22:08.901 Abort Command Limit: 4
00:22:08.901 Async Event Request Limit: 4
00:22:08.901 Number of Firmware Slots: N/A
00:22:08.901 Firmware Slot 1 Read-Only: N/A
00:22:08.901 Firmware Activation Without Reset: N/A
00:22:08.901 Multiple Update Detection Support: N/A
00:22:08.901 Firmware Update Granularity: No Information Provided
00:22:08.901 Per-Namespace SMART Log: No
00:22:08.901 Asymmetric Namespace Access Log Page: Not Supported
00:22:08.901 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:08.901 Command Effects Log Page: Supported
00:22:08.901 Get Log Page Extended Data: Supported
00:22:08.901 Telemetry Log Pages: Not Supported
00:22:08.901 Persistent Event Log Pages: Not Supported
00:22:08.901 Supported Log Pages Log Page: May Support
00:22:08.901 Commands Supported & Effects Log Page: Not Supported
00:22:08.901 Feature Identifiers & Effects Log Page:May Support
00:22:08.901 NVMe-MI Commands & Effects Log Page: May Support
00:22:08.901 Data Area 4 for Telemetry Log: Not Supported
00:22:08.901 Error Log Page Entries Supported: 128
00:22:08.901 Keep Alive: Supported
00:22:08.901 Keep Alive Granularity: 10000 ms
00:22:08.901
00:22:08.901 NVM Command Set Attributes
00:22:08.901 ==========================
00:22:08.901 Submission Queue Entry Size
00:22:08.901 Max: 64
00:22:08.901 Min: 64
00:22:08.901 Completion Queue Entry Size
00:22:08.901 Max: 16
00:22:08.901 Min: 16
00:22:08.901 Number of Namespaces: 32
00:22:08.901 Compare Command: Supported
00:22:08.901 Write Uncorrectable Command: Not Supported
00:22:08.901 Dataset Management Command: Supported
00:22:08.901 Write Zeroes Command: Supported
00:22:08.901 Set Features Save Field: Not Supported
00:22:08.901 Reservations: Supported
00:22:08.901 Timestamp: Not Supported
00:22:08.901 Copy: Supported
00:22:08.901 Volatile Write Cache: Present
00:22:08.901 Atomic Write Unit (Normal): 1
00:22:08.901 Atomic Write Unit (PFail): 1
00:22:08.901 Atomic Compare & Write Unit: 1
00:22:08.901 Fused Compare & Write: Supported
00:22:08.901 Scatter-Gather List
00:22:08.901 SGL Command Set: Supported
00:22:08.901 SGL Keyed: Supported
00:22:08.901 SGL Bit Bucket Descriptor: Not Supported
00:22:08.901 SGL Metadata Pointer: Not Supported
00:22:08.901 Oversized SGL: Not Supported
00:22:08.901 SGL Metadata Address: Not Supported
00:22:08.901 SGL Offset: Supported
00:22:08.901 Transport SGL Data Block: Not Supported
00:22:08.901 Replay Protected Memory Block: Not Supported
00:22:08.901
00:22:08.901 Firmware Slot Information
00:22:08.901 =========================
00:22:08.901 Active slot: 1
00:22:08.901 Slot 1 Firmware Revision: 25.01
00:22:08.901
00:22:08.901
00:22:08.901 Commands Supported and Effects
00:22:08.901 ==============================
00:22:08.901 Admin Commands
00:22:08.901 --------------
00:22:08.901 Get Log Page (02h): Supported
00:22:08.901 Identify (06h): Supported
00:22:08.901 Abort (08h): Supported
00:22:08.901 Set Features (09h): Supported
00:22:08.901 Get Features (0Ah): Supported
00:22:08.901 Asynchronous Event Request (0Ch): Supported
00:22:08.901 Keep Alive (18h): Supported
00:22:08.901 I/O Commands
00:22:08.901 ------------
00:22:08.901 Flush (00h): Supported LBA-Change
00:22:08.901 Write (01h): Supported LBA-Change
00:22:08.901 Read (02h): Supported
00:22:08.901 Compare (05h): Supported
00:22:08.901 Write Zeroes (08h): Supported LBA-Change
00:22:08.901 Dataset Management (09h): Supported LBA-Change
00:22:08.901 Copy (19h): Supported LBA-Change
00:22:08.901
00:22:08.901 Error Log
00:22:08.901 =========
00:22:08.901
00:22:08.901 Arbitration
00:22:08.901 ===========
00:22:08.901 Arbitration Burst: 1
00:22:08.901
00:22:08.901 Power Management
00:22:08.901 ================
00:22:08.901 Number of Power States: 1
00:22:08.901 Current Power State: Power State #0
00:22:08.901 Power State #0:
00:22:08.901 Max Power: 0.00 W
00:22:08.901 Non-Operational State: Operational
00:22:08.901 Entry Latency: Not Reported
00:22:08.901 Exit Latency: Not Reported
00:22:08.901 Relative Read Throughput: 0
00:22:08.901 Relative Read Latency: 0
00:22:08.901 Relative Write Throughput: 0
00:22:08.901 Relative Write Latency: 0
00:22:08.901 Idle Power: Not Reported
00:22:08.901 Active Power: Not Reported
00:22:08.901 Non-Operational Permissive Mode: Not Supported
00:22:08.901
00:22:08.901 Health Information
00:22:08.901 ==================
00:22:08.901 Critical Warnings:
00:22:08.901 Available Spare Space: OK
00:22:08.901 Temperature: OK
00:22:08.901 Device Reliability: OK
00:22:08.901 Read Only: No
00:22:08.901 Volatile Memory Backup: OK
00:22:08.901 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:08.901 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:08.901 Available Spare: 0%
00:22:08.901 Available Spare Threshold: 0%
00:22:08.901 Life Percentage Used:[2024-11-20 10:39:09.413063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.901
[2024-11-20 10:39:09.413068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x861690)
00:22:08.901 [2024-11-20 10:39:09.413074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.901 [2024-11-20 10:39:09.413085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3b80, cid 7, qid 0
00:22:08.901 [2024-11-20 10:39:09.413161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.901 [2024-11-20 10:39:09.413167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.901 [2024-11-20 10:39:09.413170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3b80) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413201] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:08.901 [2024-11-20 10:39:09.413211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3100) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.901 [2024-11-20 10:39:09.413221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3280) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.901 [2024-11-20 10:39:09.413229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3400) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.901 [2024-11-20 10:39:09.413241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.901 [2024-11-20 10:39:09.413252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.901 [2024-11-20 10:39:09.413265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.901 [2024-11-20 10:39:09.413275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.901 [2024-11-20 10:39:09.413341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.901 [2024-11-20 10:39:09.413347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.901 [2024-11-20 10:39:09.413350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.901 [2024-11-20 10:39:09.413359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.901 [2024-11-20 10:39:09.413365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.901 [2024-11-20 10:39:09.413371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.901 [2024-11-20 10:39:09.413383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.901 [2024-11-20 10:39:09.413456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.901 [2024-11-20 10:39:09.413461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.901 [2024-11-20 10:39:09.413464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.413471] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:22:08.902 [2024-11-20 10:39:09.413476] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:22:08.902 [2024-11-20 10:39:09.413484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.413496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.413505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.413570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.413576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.413579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.413590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.413603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.413612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.413685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.413690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.413694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.413705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.413717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.413726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.413794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.413800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.413802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.413814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.413826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.413836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.413903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.413909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.413912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.413923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.413930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.413936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.413945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.902 [2024-11-20 10:39:09.414540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.902 [2024-11-20 10:39:09.414546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.902 [2024-11-20 10:39:09.414551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.902 [2024-11-20 10:39:09.414562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.902 [2024-11-20 10:39:09.414569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.902 [2024-11-20 10:39:09.414575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.902 [2024-11-20 10:39:09.414585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.414650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.414655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.414658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.414670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.414683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.414692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.414753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.414759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.414762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.414774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.414786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.414795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.414862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.414868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.414870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.414881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.414894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.414903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.414972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.414978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.414981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.414995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.414998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.903 [2024-11-20 10:39:09.415711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.903 [2024-11-20 10:39:09.415714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.903 [2024-11-20 10:39:09.415725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.903 [2024-11-20 10:39:09.415732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.903 [2024-11-20 10:39:09.415738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.903 [2024-11-20 10:39:09.415746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.903 [2024-11-20 10:39:09.415810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.904 [2024-11-20 10:39:09.415816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.904 [2024-11-20 10:39:09.415818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.904 [2024-11-20 10:39:09.415830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.904 [2024-11-20 10:39:09.415843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.904 [2024-11-20 10:39:09.415853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.904 [2024-11-20 10:39:09.415914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.904 [2024-11-20 10:39:09.415920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.904 [2024-11-20 10:39:09.415923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.904 [2024-11-20 10:39:09.415934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.415940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861690)
00:22:08.904 [2024-11-20 10:39:09.415946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.904 [2024-11-20 10:39:09.419965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c3580, cid 3, qid 0
00:22:08.904 [2024-11-20 10:39:09.420036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.904 [2024-11-20 10:39:09.420042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.904 [2024-11-20 10:39:09.420045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.904 [2024-11-20 10:39:09.420048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c3580) on tqpair=0x861690
00:22:08.904 [2024-11-20 10:39:09.420055] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 0%
00:22:08.904 Data Units Read: 0
00:22:08.904 Data Units Written: 0
00:22:08.904 Host Read Commands: 0
00:22:08.904 Host Write Commands: 0
00:22:08.904 Controller Busy Time: 0 minutes
00:22:08.904 Power Cycles: 0
00:22:08.904 Power On Hours: 0 hours
00:22:08.904 Unsafe Shutdowns: 0
00:22:08.904 Unrecoverable Media Errors: 0
00:22:08.904 Lifetime Error Log Entries: 0
00:22:08.904 Warning Temperature Time: 0 minutes
00:22:08.904 Critical Temperature Time: 0 minutes
00:22:08.904 
00:22:08.904 Number of Queues
00:22:08.904 ================
00:22:08.904 Number of I/O Submission Queues: 127
00:22:08.904 Number of I/O Completion Queues: 127
00:22:08.904 
00:22:08.904 Active Namespaces
00:22:08.904 =================
00:22:08.904 Namespace ID:1
00:22:08.904 Error Recovery Timeout: Unlimited
00:22:08.904 Command Set Identifier: NVM (00h)
00:22:08.904 Deallocate: Supported
00:22:08.904 Deallocated/Unwritten Error: Not Supported
00:22:08.904 Deallocated Read Value: Unknown
00:22:08.904 Deallocate in Write Zeroes: Not Supported
00:22:08.904 Deallocated Guard Field: 0xFFFF
00:22:08.904 Flush: Supported
00:22:08.904 Reservation: Supported
00:22:08.904 Namespace Sharing Capabilities: Multiple Controllers
00:22:08.904 Size (in LBAs): 131072 (0GiB)
00:22:08.904 Capacity (in LBAs): 131072 (0GiB)
00:22:08.904 Utilization (in LBAs): 131072 (0GiB)
00:22:08.904 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:08.904 EUI64: ABCDEF0123456789
00:22:08.904 UUID: c8f68e70-e3e7-4ddc-b5ae-f89903b1f465
00:22:08.904 Thin Provisioning: Not Supported
00:22:08.904 Per-NS Atomic Units: Yes
00:22:08.904 Atomic Boundary Size (Normal): 0
00:22:08.904 Atomic Boundary Size (PFail): 0
00:22:08.904 Atomic Boundary Offset: 0
00:22:08.904 Maximum Single Source Range Length: 65535
00:22:08.904 Maximum Copy Length: 65535
00:22:08.904 Maximum Source Range Count: 1
00:22:08.904 NGUID/EUI64 Never Reused: No
00:22:08.904 Namespace Write Protected: No
00:22:08.904 Number of LBA Formats: 1
00:22:08.904 Current LBA Format: LBA Format #00
00:22:08.904 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:08.904 
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:08.904 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3567187 ']'
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3567187
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3567187 ']'
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3567187
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3567187
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3567187'
killing process with pid 3567187
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3567187
00:22:08.904 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3567187
00:22:09.163 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:09.163 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:09.163 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.164 10:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.698 00:22:11.698 real 0m10.088s 00:22:11.698 user 0m8.642s 00:22:11.698 sys 0m4.904s 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.698 ************************************ 00:22:11.698 END TEST nvmf_identify 00:22:11.698 ************************************ 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.698 ************************************ 00:22:11.698 START TEST nvmf_perf 00:22:11.698 ************************************ 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:11.698 * Looking for test storage... 
00:22:11.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.698 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.698 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.699 --rc genhtml_branch_coverage=1 00:22:11.699 --rc genhtml_function_coverage=1 00:22:11.699 --rc genhtml_legend=1 00:22:11.699 --rc geninfo_all_blocks=1 00:22:11.699 --rc geninfo_unexecuted_blocks=1 00:22:11.699 00:22:11.699 ' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:11.699 --rc genhtml_branch_coverage=1 00:22:11.699 --rc genhtml_function_coverage=1 00:22:11.699 --rc genhtml_legend=1 00:22:11.699 --rc geninfo_all_blocks=1 00:22:11.699 --rc geninfo_unexecuted_blocks=1 00:22:11.699 00:22:11.699 ' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.699 --rc genhtml_branch_coverage=1 00:22:11.699 --rc genhtml_function_coverage=1 00:22:11.699 --rc genhtml_legend=1 00:22:11.699 --rc geninfo_all_blocks=1 00:22:11.699 --rc geninfo_unexecuted_blocks=1 00:22:11.699 00:22:11.699 ' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.699 --rc genhtml_branch_coverage=1 00:22:11.699 --rc genhtml_function_coverage=1 00:22:11.699 --rc genhtml_legend=1 00:22:11.699 --rc geninfo_all_blocks=1 00:22:11.699 --rc geninfo_unexecuted_blocks=1 00:22:11.699 00:22:11.699 ' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.699 10:39:12 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.699 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.266 10:39:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.266 
10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.266 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:18.266 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.266 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.266 10:39:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.266 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.267 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.267 10:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:22:18.267 00:22:18.267 --- 10.0.0.2 ping statistics --- 00:22:18.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.267 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:22:18.267 00:22:18.267 --- 10.0.0.1 ping statistics --- 00:22:18.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.267 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3571340 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3571340 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3571340 ']' 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.267 [2024-11-20 10:39:18.134662] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:22:18.267 [2024-11-20 10:39:18.134714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.267 [2024-11-20 10:39:18.214305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.267 [2024-11-20 10:39:18.258274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.267 [2024-11-20 10:39:18.258310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.267 [2024-11-20 10:39:18.258317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.267 [2024-11-20 10:39:18.258323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.267 [2024-11-20 10:39:18.258328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.267 [2024-11-20 10:39:18.259897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.267 [2024-11-20 10:39:18.260009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.267 [2024-11-20 10:39:18.260046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.267 [2024-11-20 10:39:18.260047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.267 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.526 10:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.526 10:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:18.526 10:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:21.808 10:39:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:21.808 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.067 [2024-11-20 10:39:22.659694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.067 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.326 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:22.326 10:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.585 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:22.585 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:22.585 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.843 [2024-11-20 10:39:23.490713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.843 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:23.101 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:23.101 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:23.101 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:23.101 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:24.476 Initializing NVMe Controllers 00:22:24.476 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:24.476 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:24.476 Initialization complete. Launching workers. 00:22:24.476 ======================================================== 00:22:24.476 Latency(us) 00:22:24.476 Device Information : IOPS MiB/s Average min max 00:22:24.476 PCIE (0000:5e:00.0) NSID 1 from core 0: 97436.21 380.61 328.03 14.31 4401.07 00:22:24.476 ======================================================== 00:22:24.476 Total : 97436.21 380.61 328.03 14.31 4401.07 00:22:24.476 00:22:24.476 10:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:25.849 Initializing NVMe Controllers 00:22:25.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:25.849 Initialization complete. Launching workers. 
00:22:25.849 ======================================================== 00:22:25.849 Latency(us) 00:22:25.849 Device Information : IOPS MiB/s Average min max 00:22:25.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 288.98 1.13 3567.61 115.85 46112.35 00:22:25.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.80 0.22 17731.60 6992.86 47901.41 00:22:25.849 ======================================================== 00:22:25.849 Total : 345.78 1.35 5894.26 115.85 47901.41 00:22:25.849 00:22:25.849 10:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:27.222 Initializing NVMe Controllers 00:22:27.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:27.222 Initialization complete. Launching workers. 
00:22:27.222 ======================================================== 00:22:27.222 Latency(us) 00:22:27.222 Device Information : IOPS MiB/s Average min max 00:22:27.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10923.65 42.67 2939.38 495.20 9987.84 00:22:27.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3872.88 15.13 8318.08 5473.00 16396.52 00:22:27.222 ======================================================== 00:22:27.222 Total : 14796.53 57.80 4347.22 495.20 16396.52 00:22:27.222 00:22:27.222 10:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:27.222 10:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:27.222 10:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.754 Initializing NVMe Controllers 00:22:29.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.754 Controller IO queue size 128, less than required. 00:22:29.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.754 Controller IO queue size 128, less than required. 00:22:29.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.754 Initialization complete. Launching workers. 
00:22:29.754 ======================================================== 00:22:29.754 Latency(us) 00:22:29.754 Device Information : IOPS MiB/s Average min max 00:22:29.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1762.73 440.68 73677.77 52955.22 129448.10 00:22:29.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.56 153.14 220439.23 71756.33 322990.08 00:22:29.754 ======================================================== 00:22:29.754 Total : 2375.29 593.82 111525.80 52955.22 322990.08 00:22:29.754 00:22:29.754 10:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:30.011 No valid NVMe controllers or AIO or URING devices found 00:22:30.011 Initializing NVMe Controllers 00:22:30.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.011 Controller IO queue size 128, less than required. 00:22:30.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:30.011 Controller IO queue size 128, less than required. 00:22:30.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:30.011 WARNING: Some requested NVMe devices were skipped 00:22:30.011 10:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:32.541 Initializing NVMe Controllers 00:22:32.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.541 Controller IO queue size 128, less than required. 00:22:32.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.541 Controller IO queue size 128, less than required. 00:22:32.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.541 Initialization complete. Launching workers. 
00:22:32.541 00:22:32.541 ==================== 00:22:32.541 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:32.541 TCP transport: 00:22:32.541 polls: 11096 00:22:32.541 idle_polls: 7517 00:22:32.541 sock_completions: 3579 00:22:32.541 nvme_completions: 6269 00:22:32.541 submitted_requests: 9356 00:22:32.541 queued_requests: 1 00:22:32.541 00:22:32.541 ==================== 00:22:32.541 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:32.541 TCP transport: 00:22:32.541 polls: 11282 00:22:32.541 idle_polls: 7580 00:22:32.541 sock_completions: 3702 00:22:32.541 nvme_completions: 6355 00:22:32.541 submitted_requests: 9480 00:22:32.541 queued_requests: 1 00:22:32.541 ======================================================== 00:22:32.542 Latency(us) 00:22:32.542 Device Information : IOPS MiB/s Average min max 00:22:32.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1565.21 391.30 83249.51 52893.76 133297.94 00:22:32.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1586.69 396.67 81506.94 46425.51 119176.36 00:22:32.542 ======================================================== 00:22:32.542 Total : 3151.90 787.98 82372.29 46425.51 133297.94 00:22:32.542 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:32.542 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.801 rmmod nvme_tcp 00:22:32.801 rmmod nvme_fabrics 00:22:32.801 rmmod nvme_keyring 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3571340 ']' 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3571340 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3571340 ']' 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3571340 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3571340 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3571340' 00:22:32.801 killing process with pid 3571340 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3571340 00:22:32.801 10:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3571340 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.177 10:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.714 00:22:36.714 real 0m25.023s 00:22:36.714 user 1m6.154s 00:22:36.714 sys 0m8.394s 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.714 ************************************ 00:22:36.714 END TEST nvmf_perf 00:22:36.714 ************************************ 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.714 ************************************ 00:22:36.714 START TEST nvmf_fio_host 00:22:36.714 ************************************ 00:22:36.714 10:39:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:36.714 * Looking for test storage... 00:22:36.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.714 10:39:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.714 10:39:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.714 --rc genhtml_branch_coverage=1 00:22:36.714 --rc genhtml_function_coverage=1 00:22:36.714 --rc genhtml_legend=1 00:22:36.714 --rc geninfo_all_blocks=1 00:22:36.714 --rc geninfo_unexecuted_blocks=1 00:22:36.714 00:22:36.714 ' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.714 --rc genhtml_branch_coverage=1 00:22:36.714 --rc genhtml_function_coverage=1 00:22:36.714 --rc genhtml_legend=1 00:22:36.714 --rc geninfo_all_blocks=1 00:22:36.714 --rc geninfo_unexecuted_blocks=1 00:22:36.714 00:22:36.714 ' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.714 --rc genhtml_branch_coverage=1 00:22:36.714 --rc genhtml_function_coverage=1 00:22:36.714 --rc genhtml_legend=1 00:22:36.714 --rc geninfo_all_blocks=1 00:22:36.714 --rc geninfo_unexecuted_blocks=1 00:22:36.714 00:22:36.714 ' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.714 --rc genhtml_branch_coverage=1 00:22:36.714 --rc genhtml_function_coverage=1 00:22:36.714 --rc genhtml_legend=1 00:22:36.714 --rc geninfo_all_blocks=1 00:22:36.714 --rc geninfo_unexecuted_blocks=1 00:22:36.714 00:22:36.714 ' 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.714 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.715 10:39:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.715 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.279 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.280 10:39:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.280 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.280 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.280 10:39:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.280 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:22:43.280 00:22:43.280 --- 10.0.0.2 ping statistics --- 00:22:43.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.280 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:22:43.280 00:22:43.280 --- 10.0.0.1 ping statistics --- 00:22:43.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.280 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3577452 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3577452 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3577452 ']' 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.280 [2024-11-20 10:39:43.184461] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:22:43.280 [2024-11-20 10:39:43.184513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.280 [2024-11-20 10:39:43.267884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.280 [2024-11-20 10:39:43.310470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.280 [2024-11-20 10:39:43.310515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.280 [2024-11-20 10:39:43.310527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.280 [2024-11-20 10:39:43.310534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.280 [2024-11-20 10:39:43.310540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.280 [2024-11-20 10:39:43.312201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.280 [2024-11-20 10:39:43.312313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.280 [2024-11-20 10:39:43.312422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.280 [2024-11-20 10:39:43.312423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:43.280 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:43.281 [2024-11-20 10:39:43.586621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.281 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:43.281 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.281 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.281 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:43.281 Malloc1 00:22:43.281 10:39:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.538 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:43.795 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.795 [2024-11-20 10:39:44.444502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.795 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:44.052 10:39:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:44.052 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:44.053 10:39:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.310 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:44.310 fio-3.35 00:22:44.310 Starting 1 thread 00:22:46.835 [2024-11-20 10:39:47.343888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec13d0 is same with the state(6) to be set 00:22:46.835 [2024-11-20 10:39:47.343958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec13d0 is same with the state(6) to be set 00:22:46.835 [2024-11-20 10:39:47.343968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec13d0 is same with the state(6) to be set 00:22:46.835 [2024-11-20 10:39:47.343975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec13d0 is same with the state(6) to be set 00:22:46.835 00:22:46.835 test: (groupid=0, jobs=1): err= 0: pid=3578041: Wed Nov 20 10:39:47 2024 00:22:46.835 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec) 00:22:46.835 slat (nsec): min=1567, max=253953, avg=1741.33, stdev=2281.34 00:22:46.835 clat (usec): min=3200, max=10348, avg=6086.44, stdev=477.90 00:22:46.835 lat (usec): min=3230, max=10349, avg=6088.18, stdev=477.87 00:22:46.835 clat percentiles (usec): 00:22:46.835 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:22:46.835 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:46.835 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:46.835 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9241], 00:22:46.835 | 99.99th=[10290] 00:22:46.835 bw ( KiB/s): min=45776, 
max=47096, per=99.94%, avg=46546.00, stdev=556.77, samples=4 00:22:46.835 iops : min=11444, max=11774, avg=11636.50, stdev=139.19, samples=4 00:22:46.835 write: IOPS=11.6k, BW=45.2MiB/s (47.3MB/s)(90.5MiB/2005msec); 0 zone resets 00:22:46.835 slat (nsec): min=1610, max=226594, avg=1791.45, stdev=1668.07 00:22:46.835 clat (usec): min=2454, max=8741, avg=4923.19, stdev=388.78 00:22:46.835 lat (usec): min=2469, max=8742, avg=4924.98, stdev=388.80 00:22:46.835 clat percentiles (usec): 00:22:46.835 | 1.00th=[ 4015], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:46.835 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:46.835 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5473], 00:22:46.835 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7308], 99.95th=[ 8094], 00:22:46.835 | 99.99th=[ 8717] 00:22:46.835 bw ( KiB/s): min=45960, max=46584, per=100.00%, avg=46244.00, stdev=256.87, samples=4 00:22:46.835 iops : min=11490, max=11646, avg=11561.00, stdev=64.22, samples=4 00:22:46.835 lat (msec) : 4=0.51%, 10=99.47%, 20=0.02% 00:22:46.835 cpu : usr=74.35%, sys=24.65%, ctx=113, majf=0, minf=3 00:22:46.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:46.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:46.835 issued rwts: total=23345,23178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:46.835 00:22:46.835 Run status group 0 (all jobs): 00:22:46.835 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:22:46.835 WRITE: bw=45.2MiB/s (47.3MB/s), 45.2MiB/s-45.2MiB/s (47.3MB/s-47.3MB/s), io=90.5MiB (94.9MB), run=2005-2005msec 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:46.835 10:39:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:46.835 10:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.093 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:47.093 fio-3.35 00:22:47.093 Starting 1 thread 00:22:49.619 00:22:49.619 test: (groupid=0, jobs=1): err= 0: pid=3578615: Wed Nov 20 10:39:50 2024 00:22:49.619 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(341MiB/2006msec) 00:22:49.619 slat (usec): min=2, max=101, avg= 2.83, stdev= 1.32 00:22:49.619 clat (usec): min=1873, max=13586, avg=6740.50, stdev=1495.92 00:22:49.619 lat (usec): min=1876, max=13601, avg=6743.33, stdev=1496.08 00:22:49.619 clat percentiles (usec): 00:22:49.619 | 1.00th=[ 3752], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:22:49.619 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7242], 00:22:49.619 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 
9110], 00:22:49.619 | 99.00th=[10683], 99.50th=[11076], 99.90th=[11863], 99.95th=[13042], 00:22:49.619 | 99.99th=[13566] 00:22:49.619 bw ( KiB/s): min=83744, max=98112, per=50.64%, avg=88192.00, stdev=6716.29, samples=4 00:22:49.619 iops : min= 5234, max= 6132, avg=5512.00, stdev=419.77, samples=4 00:22:49.619 write: IOPS=6405, BW=100MiB/s (105MB/s)(180MiB/1803msec); 0 zone resets 00:22:49.619 slat (usec): min=30, max=389, avg=31.71, stdev= 7.60 00:22:49.619 clat (usec): min=4442, max=15928, avg=8847.98, stdev=1494.06 00:22:49.619 lat (usec): min=4472, max=15959, avg=8879.69, stdev=1495.73 00:22:49.619 clat percentiles (usec): 00:22:49.619 | 1.00th=[ 6194], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7570], 00:22:49.619 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:22:49.619 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:22:49.619 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14746], 99.95th=[15008], 00:22:49.619 | 99.99th=[15926] 00:22:49.619 bw ( KiB/s): min=86720, max=102400, per=89.90%, avg=92136.00, stdev=7151.76, samples=4 00:22:49.619 iops : min= 5420, max= 6400, avg=5758.50, stdev=446.99, samples=4 00:22:49.619 lat (msec) : 2=0.01%, 4=1.41%, 10=89.66%, 20=8.91% 00:22:49.619 cpu : usr=86.74%, sys=12.51%, ctx=58, majf=0, minf=3 00:22:49.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:49.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.619 issued rwts: total=21836,11549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.619 00:22:49.619 Run status group 0 (all jobs): 00:22:49.619 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=341MiB (358MB), run=2006-2006msec 00:22:49.619 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=180MiB (189MB), 
run=1803-1803msec 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.619 rmmod nvme_tcp 00:22:49.619 rmmod nvme_fabrics 00:22:49.619 rmmod nvme_keyring 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.619 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3577452 ']' 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3577452 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3577452 ']' 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3577452 
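The fio_plugin trace above (autotest_common.sh@1341-1356) runs `ldd` on the fio plugin once per known sanitizer runtime and preloads whichever ASAN library it finds before launching fio, so the sanitizer gets initialized ahead of the dlopen()ed plugin. A minimal sketch of that pattern follows; the function name is a simplification for illustration, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch: find the ASAN runtime a plugin links against, so it can be
# LD_PRELOADed ahead of the plugin when fio dlopen()s it.
detect_asan_lib() {
  local plugin=$1 sanitizer asan_lib
  local sanitizers=('libasan' 'libclang_rt.asan')
  for sanitizer in "${sanitizers[@]}"; do
    # ldd prints "libfoo.so => /path/libfoo.so (0x...)"; column 3 is the path
    asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
    if [[ -n $asan_lib ]]; then
      echo "$asan_lib"
      return 0
    fi
  done
  return 1  # not an ASAN build; nothing extra to preload
}
```

In the log above both lookups come back empty (`asan_lib=`), so `LD_PRELOAD` ends up holding only the spdk_nvme plugin path itself.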
00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.620 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3577452 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3577452' 00:22:49.878 killing process with pid 3577452 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3577452 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3577452 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:49.878 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.879 10:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.412 00:22:52.412 real 0m15.658s 00:22:52.412 user 0m45.900s 00:22:52.412 sys 0m6.474s 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.412 ************************************ 00:22:52.412 END TEST nvmf_fio_host 00:22:52.412 ************************************ 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.412 ************************************ 00:22:52.412 START TEST nvmf_failover 00:22:52.412 ************************************ 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.412 * Looking for test storage... 
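The `killprocess` teardown traced above (autotest_common.sh@954-978) refuses to signal a PID unless it still resolves to a live, non-`sudo` process (here `reactor_0`), then reaps it. A hedged sketch of that guard, simplified from the trace; the real helper handles more cases (sudo-wrapped children, per-OS `ps` flags):

```shell
#!/usr/bin/env bash
# Sketch: kill a background process only after confirming the PID is
# still alive and does not name the sudo wrapper, then reap it.
killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 0      # already gone: nothing to do
  local name
  name=$(ps --no-headers -o comm= -p "$pid")  # e.g. "reactor_0" in the log
  [[ $name == sudo ]] && return 1             # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true             # reap if it was our child
}
```

The `kill -0` probe is the standard "is this PID alive" check: it delivers no signal, only the permission/existence test, which is why a recycled or stale PID is never signalled by mistake.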
00:22:52.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:52.412 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:52.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.413 --rc genhtml_branch_coverage=1 00:22:52.413 --rc genhtml_function_coverage=1 00:22:52.413 --rc genhtml_legend=1 00:22:52.413 --rc geninfo_all_blocks=1 00:22:52.413 --rc geninfo_unexecuted_blocks=1 00:22:52.413 00:22:52.413 ' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:52.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.413 --rc genhtml_branch_coverage=1 00:22:52.413 --rc genhtml_function_coverage=1 00:22:52.413 --rc genhtml_legend=1 00:22:52.413 --rc geninfo_all_blocks=1 00:22:52.413 --rc geninfo_unexecuted_blocks=1 00:22:52.413 00:22:52.413 ' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:52.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.413 --rc genhtml_branch_coverage=1 00:22:52.413 --rc genhtml_function_coverage=1 00:22:52.413 --rc genhtml_legend=1 00:22:52.413 --rc geninfo_all_blocks=1 00:22:52.413 --rc geninfo_unexecuted_blocks=1 00:22:52.413 00:22:52.413 ' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:52.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.413 --rc genhtml_branch_coverage=1 00:22:52.413 --rc genhtml_function_coverage=1 00:22:52.413 --rc genhtml_legend=1 00:22:52.413 --rc geninfo_all_blocks=1 00:22:52.413 --rc geninfo_unexecuted_blocks=1 00:22:52.413 00:22:52.413 ' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.413 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.981 10:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.981 10:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.981 10:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.981 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.981 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.981 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.982 10:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:22:58.982 00:22:58.982 --- 10.0.0.2 ping statistics --- 00:22:58.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.982 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
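The nvmf_tcp_init sequence above splits the two cvl NICs across a network namespace: cvl_0_0 (the target side, 10.0.0.2) moves into cvl_0_0_ns_spdk while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and the cross-namespace pings confirm the path. A dry-run sketch of that plumbing is below; the real commands need root and the physical NICs, so a `run` wrapper only echoes each step. Interface and namespace names follow the log, but the helper itself is an illustration, not the exact nvmf/common.sh code:

```shell
#!/usr/bin/env bash
# Dry-run sketch: `run` echoes each step instead of executing it.
run() { echo "+ $*"; }

setup_tcp_ns() {
  local ns=$1 target_if=$2 initiator_if=$3
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"          # target NIC into the namespace
  run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  # open the NVMe/TCP port toward the initiator-side interface
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Putting the target in its own namespace forces initiator-to-target traffic through a real TCP/IP path between the two physical ports instead of short-circuiting over loopback, which is what makes the later `ping -c 1 10.0.0.2` / `ip netns exec ... ping -c 1 10.0.0.1` round-trips meaningful.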
00:22:58.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:58.982 00:22:58.982 --- 10.0.0.1 ping statistics --- 00:22:58.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.982 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3582375 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3582375 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3582375 ']' 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.982 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.982 [2024-11-20 10:39:58.848301] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:22:58.982 [2024-11-20 10:39:58.848344] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.982 [2024-11-20 10:39:58.927808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.982 [2024-11-20 10:39:58.969624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.982 [2024-11-20 10:39:58.969659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.982 [2024-11-20 10:39:58.969666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.982 [2024-11-20 10:39:58.969673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:58.982 [2024-11-20 10:39:58.969678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.982 [2024-11-20 10:39:58.971175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.982 [2024-11-20 10:39:58.971202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.982 [2024-11-20 10:39:58.971203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:58.982 [2024-11-20 10:39:59.292692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:58.982 Malloc0 00:22:58.982 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.240 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.240 10:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.498 [2024-11-20 10:40:00.115774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.498 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.755 [2024-11-20 10:40:00.312288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:00.013 [2024-11-20 10:40:00.516967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3582757 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3582757 /var/tmp/bdevperf.sock 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3582757 ']' 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.013 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.272 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.272 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:00.272 10:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:00.529 NVMe0n1 00:23:00.530 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.095 00:23:01.095 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.095 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3582860 00:23:01.095 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:02.027 10:40:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.287 [2024-11-20 10:40:02.767200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a02d0 is same with the state(6) to be set 00:23:02.288 10:40:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:05.568 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:05.568 00:23:05.568 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.826 [2024-11-20 10:40:06.318903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1060 is same with the state(6) to be set 00:23:05.826 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:09.252 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.252 [2024-11-20 10:40:09.533861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.252 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:10.185 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:10.185 [2024-11-20 10:40:10.751883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1e30 is same with the state(6) to be set
00:23:10.185 [2024-11-20 10:40:10.751964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1e30 is same with the state(6) to be set 00:23:10.185 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3582860 00:23:16.750 { 00:23:16.750 "results": [ 00:23:16.750 { 00:23:16.750 "job": "NVMe0n1", 00:23:16.750 "core_mask": "0x1", 00:23:16.750 "workload": "verify", 00:23:16.750 "status": "finished", 00:23:16.750 "verify_range": { 00:23:16.750 "start": 0, 00:23:16.750 "length": 16384 00:23:16.750 }, 00:23:16.750 "queue_depth": 128, 00:23:16.750 "io_size": 4096, 00:23:16.750 "runtime": 15.006912, 00:23:16.750 "iops": 10869.324748489229, 00:23:16.750 "mibps": 42.45829979878605, 00:23:16.750 "io_failed": 17261, 00:23:16.750 "io_timeout": 0, 00:23:16.750 "avg_latency_us": 10626.912790403041, 00:23:16.750 "min_latency_us": 427.4086956521739, 00:23:16.750 "max_latency_us": 21541.398260869566 00:23:16.750 } 00:23:16.750 ], 00:23:16.750 "core_count": 1 00:23:16.750 } 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3582757 ']' 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3582757' 00:23:16.750 killing process with pid 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3582757 00:23:16.750 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.750 [2024-11-20 10:40:00.576341] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:23:16.750 [2024-11-20 10:40:00.576395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582757 ] 00:23:16.750 [2024-11-20 10:40:00.652198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.750 [2024-11-20 10:40:00.693861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.750 Running I/O for 15 seconds... 
00:23:16.750 11064.00 IOPS, 43.22 MiB/s [2024-11-20T09:40:17.481Z]
00:23:16.750 [2024-11-20 10:40:02.769009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:16.750 [2024-11-20 10:40:02.769047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ/WRITE command/completion pairs for lba:98328 through lba:99128 elided: each outstanding I/O on qid:1 was completed with ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 10:40:02.770580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 
10:40:02.770826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.753 [2024-11-20 10:40:02.770875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.753 [2024-11-20 10:40:02.770901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:23:16.753 [2024-11-20 10:40:02.770908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.753 [2024-11-20 
10:40:02.770924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.753 [2024-11-20 10:40:02.770929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:23:16.753 [2024-11-20 10:40:02.770936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.753 [2024-11-20 10:40:02.770954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.753 [2024-11-20 10:40:02.770960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:23:16.753 [2024-11-20 10:40:02.770966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.753 [2024-11-20 10:40:02.770978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.753 [2024-11-20 10:40:02.770983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:23:16.753 [2024-11-20 10:40:02.770989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.770996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.753 [2024-11-20 10:40:02.771000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.753 [2024-11-20 10:40:02.771006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 
PRP1 0x0 PRP2 0x0 00:23:16.753 [2024-11-20 10:40:02.771012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.771056] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:16.753 [2024-11-20 10:40:02.771078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.753 [2024-11-20 10:40:02.771085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.771093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.753 [2024-11-20 10:40:02.771099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.771106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.753 [2024-11-20 10:40:02.771112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.771120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.753 [2024-11-20 10:40:02.771126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:02.771133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:16.753 [2024-11-20 10:40:02.771170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363340 (9): Bad file descriptor 00:23:16.753 [2024-11-20 10:40:02.773991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:16.753 [2024-11-20 10:40:02.843529] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:16.753 10697.50 IOPS, 41.79 MiB/s [2024-11-20T09:40:17.484Z] 10839.33 IOPS, 42.34 MiB/s [2024-11-20T09:40:17.484Z] 10949.00 IOPS, 42.77 MiB/s [2024-11-20T09:40:17.484Z] [2024-11-20 10:40:06.320451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.753 [2024-11-20 10:40:06.320486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.753 [2024-11-20 10:40:06.320501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 
10:40:06.320631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:16.754 [2024-11-20 10:40:06.320813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.320989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.320998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.321013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.321020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.754 [2024-11-20 10:40:06.321028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.754 [2024-11-20 10:40:06.321035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 
10:40:06.321074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321153] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.755 [2024-11-20 10:40:06.321469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 
[2024-11-20 10:40:06.321498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321578] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.755 [2024-11-20 10:40:06.321621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.755 [2024-11-20 10:40:06.321629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 
[2024-11-20 10:40:06.321912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.756 [2024-11-20 10:40:06.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.321974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44328 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.321981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.321990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.321995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44336 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:16.756 [2024-11-20 10:40:06.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44344 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44352 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44360 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44368 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322104] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44376 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44384 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44392 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322187] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44400 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.756 [2024-11-20 10:40:06.322212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.756 [2024-11-20 10:40:06.322218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44408 len:8 PRP1 0x0 PRP2 0x0 00:23:16.756 [2024-11-20 10:40:06.322224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.756 [2024-11-20 10:40:06.322230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44416 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44424 len:8 PRP1 0x0 PRP2 0x0 
00:23:16.757 [2024-11-20 10:40:06.322271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44432 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44440 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44448 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322349] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44456 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44464 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44472 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44480 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44528 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322592] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44536 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44544 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44552 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.322651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.322668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44560 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 
[2024-11-20 10:40:06.322674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.322681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.757 [2024-11-20 10:40:06.322686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.757 [2024-11-20 10:40:06.333470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44568 len:8 PRP1 0x0 PRP2 0x0 00:23:16.757 [2024-11-20 10:40:06.333485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.333536] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:16.757 [2024-11-20 10:40:06.333563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.757 [2024-11-20 10:40:06.333574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.333585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.757 [2024-11-20 10:40:06.333595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.333604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.757 [2024-11-20 10:40:06.333612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 
10:40:06.333622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.757 [2024-11-20 10:40:06.333634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.757 [2024-11-20 10:40:06.333643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:16.757 [2024-11-20 10:40:06.333670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363340 (9): Bad file descriptor 00:23:16.758 [2024-11-20 10:40:06.337526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:16.758 [2024-11-20 10:40:06.447187] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:16.758 10699.80 IOPS, 41.80 MiB/s [2024-11-20T09:40:17.489Z] 10770.67 IOPS, 42.07 MiB/s [2024-11-20T09:40:17.489Z] 10809.00 IOPS, 42.22 MiB/s [2024-11-20T09:40:17.489Z] 10862.12 IOPS, 42.43 MiB/s [2024-11-20T09:40:17.489Z] 10908.89 IOPS, 42.61 MiB/s [2024-11-20T09:40:17.489Z] [2024-11-20 10:40:10.753951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.753986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754019] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.758 [2024-11-20 10:40:10.754133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754200] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 
[2024-11-20 10:40:10.754372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.758 [2024-11-20 10:40:10.754561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.758 [2024-11-20 10:40:10.754569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 
10:40:10.754627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.759 [2024-11-20 10:40:10.754855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.754885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76104 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.754891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.754906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.754912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76112 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.754920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.754931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.754937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76120 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.754943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.754960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.754966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.754973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.754981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.754986] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.754991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76136 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.754997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.755004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.755009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.755015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.755021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.755027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.755033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.755038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 [2024-11-20 10:40:10.755044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.755051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.755056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.759 [2024-11-20 10:40:10.755061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76160 len:8 PRP1 0x0 PRP2 0x0 00:23:16.759 
[2024-11-20 10:40:10.755067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.759 [2024-11-20 10:40:10.755074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.759 [2024-11-20 10:40:10.755079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76176 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76192 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76200 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76208 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:16.760 [2024-11-20 10:40:10.755226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76216 len:8 PRP1 0x0 PRP2 0x0 00:23:16.760 [2024-11-20 10:40:10.755232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.760 [2024-11-20 10:40:10.755239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:16.760 [2024-11-20 10:40:10.755244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... identical abort/manual-complete sequence repeated for each queued WRITE, lba:76224 through lba:76656 in steps of 8 blocks ...]
00:23:16.762 [2024-11-20 10:40:10.766828] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:16.762 [2024-11-20 10:40:10.766857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.762 [2024-11-20 10:40:10.766867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.762 [2024-11-20 10:40:10.766880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.762 [2024-11-20 10:40:10.766890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.762 [2024-11-20 10:40:10.766899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.762 [2024-11-20 10:40:10.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.762 [2024-11-20 10:40:10.766918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.762 [2024-11-20 10:40:10.766927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:16.762 [2024-11-20 10:40:10.766936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:16.762 [2024-11-20 10:40:10.766981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363340 (9): Bad file descriptor 00:23:16.762 [2024-11-20 10:40:10.770818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:16.762 [2024-11-20 10:40:10.953434] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:16.762 10701.00 IOPS, 41.80 MiB/s [2024-11-20T09:40:17.493Z] 10754.45 IOPS, 42.01 MiB/s [2024-11-20T09:40:17.493Z] 10798.42 IOPS, 42.18 MiB/s [2024-11-20T09:40:17.493Z] 10820.23 IOPS, 42.27 MiB/s [2024-11-20T09:40:17.493Z] 10851.43 IOPS, 42.39 MiB/s [2024-11-20T09:40:17.493Z] 10869.20 IOPS, 42.46 MiB/s 00:23:16.762 Latency(us) 00:23:16.762 [2024-11-20T09:40:17.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.762 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:16.762 Verification LBA range: start 0x0 length 0x4000 00:23:16.762 NVMe0n1 : 15.01 10869.32 42.46 1150.20 0.00 10626.91 427.41 21541.40 00:23:16.762 [2024-11-20T09:40:17.493Z] =================================================================================================================== 00:23:16.762 [2024-11-20T09:40:17.493Z] Total : 10869.32 42.46 1150.20 0.00 10626.91 427.41 21541.40 00:23:16.762 Received shutdown signal, test time was about 15.000000 seconds 00:23:16.762 00:23:16.762 Latency(us) 00:23:16.762 [2024-11-20T09:40:17.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.762 [2024-11-20T09:40:17.493Z] =================================================================================================================== 00:23:16.762 [2024-11-20T09:40:17.493Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.762 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:16.762 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3585387 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3585387 /var/tmp/bdevperf.sock 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3585387 ']' 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
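The pass check above counts "Resetting controller successful" notices in the captured bdevperf output and requires exactly 3 (one per failover). A minimal stand-alone sketch of that check follows; the log lines in the here-string are illustrative stand-ins for the real captured log, not output from this run:

```shell
# Sketch of the failover pass check: count successful controller resets
# in a captured bdevperf log; the test requires the count to equal 3.
# The log content below is a hypothetical stand-in for the real capture.
log='[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.'

# grep -c prints the number of matching lines.
count=$(printf '%s\n' "$log" | grep -c 'Resetting controller successful')
echo "count=$count"

# Fail the test if the expected number of failovers did not complete.
if [ "$count" -ne 3 ]; then
  echo "failover count mismatch" >&2
  exit 1
fi
```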
00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.763 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 10:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.763 10:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:16.763 10:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.763 [2024-11-20 10:40:17.379912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.763 10:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:17.020 [2024-11-20 10:40:17.592585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:17.020 10:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.584 NVMe0n1 00:23:17.584 10:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.841 00:23:17.842 10:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:18.099 00:23:18.099 10:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.099 10:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:18.356 10:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.613 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:21.889 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.889 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:21.889 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3586311 00:23:21.889 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.889 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3586311 00:23:22.823 { 00:23:22.823 "results": [ 00:23:22.823 { 00:23:22.823 "job": "NVMe0n1", 00:23:22.823 "core_mask": "0x1", 00:23:22.823 "workload": "verify", 00:23:22.823 "status": "finished", 00:23:22.823 "verify_range": { 00:23:22.823 "start": 0, 00:23:22.823 "length": 16384 00:23:22.823 }, 00:23:22.823 "queue_depth": 128, 00:23:22.823 "io_size": 4096, 00:23:22.823 "runtime": 1.007049, 00:23:22.823 "iops": 11078.904800064347, 00:23:22.823 "mibps": 43.276971875251355, 00:23:22.823 "io_failed": 0, 00:23:22.823 "io_timeout": 0, 00:23:22.823 "avg_latency_us": 
11510.421620897, 00:23:22.823 "min_latency_us": 2037.3147826086956, 00:23:22.823 "max_latency_us": 9573.954782608696 00:23:22.823 } 00:23:22.823 ], 00:23:22.823 "core_count": 1 00:23:22.823 } 00:23:22.823 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:22.823 [2024-11-20 10:40:16.987754] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:23:22.823 [2024-11-20 10:40:16.987806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585387 ] 00:23:22.823 [2024-11-20 10:40:17.063242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.823 [2024-11-20 10:40:17.101124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.823 [2024-11-20 10:40:19.122295] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:22.823 [2024-11-20 10:40:19.122338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.823 [2024-11-20 10:40:19.122349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.823 [2024-11-20 10:40:19.122358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.823 [2024-11-20 10:40:19.122365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.823 [2024-11-20 10:40:19.122372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:22.823 [2024-11-20 10:40:19.122379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.823 [2024-11-20 10:40:19.122386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.823 [2024-11-20 10:40:19.122393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.823 [2024-11-20 10:40:19.122400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:22.823 [2024-11-20 10:40:19.122425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:22.823 [2024-11-20 10:40:19.122439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dc340 (9): Bad file descriptor 00:23:22.823 [2024-11-20 10:40:19.173487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:22.823 Running I/O for 1 seconds... 
00:23:22.823 11029.00 IOPS, 43.08 MiB/s 00:23:22.823 Latency(us) 00:23:22.823 [2024-11-20T09:40:23.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.823 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:22.823 Verification LBA range: start 0x0 length 0x4000 00:23:22.823 NVMe0n1 : 1.01 11078.90 43.28 0.00 0.00 11510.42 2037.31 9573.95 00:23:22.823 [2024-11-20T09:40:23.554Z] =================================================================================================================== 00:23:22.823 [2024-11-20T09:40:23.554Z] Total : 11078.90 43.28 0.00 0.00 11510.42 2037.31 9573.95 00:23:22.823 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.823 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:23.081 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:23.340 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.340 10:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:23.598 10:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:23.598 10:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3585387 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3585387 ']' 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3585387 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3585387 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3585387' 00:23:26.881 killing process with pid 3585387 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3585387 00:23:26.881 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3585387 00:23:27.140 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:27.140 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.398 rmmod nvme_tcp 00:23:27.398 rmmod nvme_fabrics 00:23:27.398 rmmod nvme_keyring 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3582375 ']' 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3582375 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3582375 ']' 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3582375 00:23:27.398 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:27.399 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.399 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3582375 00:23:27.399 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:27.399 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.399 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3582375' 00:23:27.399 killing process with pid 3582375 00:23:27.399 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3582375 00:23:27.399 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3582375 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.657 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.658 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.658 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.658 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.658 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.560 10:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.560 00:23:29.560 real 0m37.564s 00:23:29.560 user 1m59.340s 00:23:29.560 sys 
0m7.829s 00:23:29.560 10:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.560 10:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:29.560 ************************************ 00:23:29.560 END TEST nvmf_failover 00:23:29.560 ************************************ 00:23:29.819 10:40:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.820 ************************************ 00:23:29.820 START TEST nvmf_host_discovery 00:23:29.820 ************************************ 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:29.820 * Looking for test storage... 
00:23:29.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.820 --rc genhtml_branch_coverage=1 00:23:29.820 --rc genhtml_function_coverage=1 00:23:29.820 --rc 
genhtml_legend=1 00:23:29.820 --rc geninfo_all_blocks=1 00:23:29.820 --rc geninfo_unexecuted_blocks=1 00:23:29.820 00:23:29.820 ' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.820 --rc genhtml_branch_coverage=1 00:23:29.820 --rc genhtml_function_coverage=1 00:23:29.820 --rc genhtml_legend=1 00:23:29.820 --rc geninfo_all_blocks=1 00:23:29.820 --rc geninfo_unexecuted_blocks=1 00:23:29.820 00:23:29.820 ' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.820 --rc genhtml_branch_coverage=1 00:23:29.820 --rc genhtml_function_coverage=1 00:23:29.820 --rc genhtml_legend=1 00:23:29.820 --rc geninfo_all_blocks=1 00:23:29.820 --rc geninfo_unexecuted_blocks=1 00:23:29.820 00:23:29.820 ' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.820 --rc genhtml_branch_coverage=1 00:23:29.820 --rc genhtml_function_coverage=1 00:23:29.820 --rc genhtml_legend=1 00:23:29.820 --rc geninfo_all_blocks=1 00:23:29.820 --rc geninfo_unexecuted_blocks=1 00:23:29.820 00:23:29.820 ' 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.820 10:40:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.820 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.079 10:40:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.079 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.080 10:40:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.080 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.646 
10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.646 10:40:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.646 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:36.647 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:36.647 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:36.647 Found net devices under 0000:86:00.0: cvl_0_0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:36.647 Found net devices under 0000:86:00.1: cvl_0_1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:23:36.647 00:23:36.647 --- 10.0.0.2 ping statistics --- 00:23:36.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.647 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:23:36.647 00:23:36.647 --- 10.0.0.1 ping statistics --- 00:23:36.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.647 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.647 
10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3590752 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3590752 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3590752 ']' 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.647 [2024-11-20 10:40:36.566407] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:23:36.647 [2024-11-20 10:40:36.566461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.647 [2024-11-20 10:40:36.645166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.647 [2024-11-20 10:40:36.686677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.647 [2024-11-20 10:40:36.686712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.647 [2024-11-20 10:40:36.686720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.647 [2024-11-20 10:40:36.686725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.647 [2024-11-20 10:40:36.686730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.647 [2024-11-20 10:40:36.687283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.647 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 [2024-11-20 10:40:36.817600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 [2024-11-20 10:40:36.829783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:36.648 10:40:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 null0 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 null1 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3590783 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3590783 /tmp/host.sock 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3590783 ']' 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:36.648 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.648 10:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 [2024-11-20 10:40:36.906840] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:23:36.648 [2024-11-20 10:40:36.906884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590783 ] 00:23:36.648 [2024-11-20 10:40:36.979321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.648 [2024-11-20 10:40:37.021820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:36.648 
10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:36.648 10:40:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:36.648 
10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.648 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.649 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.649 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 [2024-11-20 10:40:37.443334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.907 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.908 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.166 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:37.166 10:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:37.733 [2024-11-20 10:40:38.196450] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:37.733 [2024-11-20 10:40:38.196470] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:37.733 [2024-11-20 10:40:38.196481] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.733 [2024-11-20 10:40:38.283744] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:37.733 [2024-11-20 10:40:38.344369] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:37.733 [2024-11-20 10:40:38.345145] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x2482dd0:1 started. 00:23:37.733 [2024-11-20 10:40:38.346538] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.733 [2024-11-20 10:40:38.346554] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.733 [2024-11-20 10:40:38.354324] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2482dd0 was disconnected and freed. delete nvme_qpair. 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:37.991 10:40:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.991 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.249 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:38.250 10:40:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.250 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.508 [2024-11-20 10:40:39.041625] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24831a0:1 started. 
00:23:38.508 [2024-11-20 10:40:39.045943] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24831a0 was disconnected and freed. delete nvme_qpair. 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.508 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.509 [2024-11-20 10:40:39.127974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:38.509 [2024-11-20 10:40:39.128955] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:38.509 [2024-11-20 10:40:39.128974] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.509 10:40:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.509 [2024-11-20 10:40:39.215588] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.509 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.766 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.766 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:38.766 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:38.766 [2024-11-20 10:40:39.281274] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:38.766 [2024-11-20 10:40:39.281308] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.766 [2024-11-20 10:40:39.281317] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:38.766 [2024-11-20 10:40:39.281322] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.702 [2024-11-20 10:40:40.372413] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:39.702 [2024-11-20 10:40:40.372439] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.702 [2024-11-20 10:40:40.375987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.702 [2024-11-20 10:40:40.376007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.702 [2024-11-20 10:40:40.376016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:39.702 [2024-11-20 10:40:40.376023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.702 [2024-11-20 10:40:40.376031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.702 [2024-11-20 10:40:40.376037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.702 [2024-11-20 10:40:40.376044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.702 [2024-11-20 10:40:40.376052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.702 [2024-11-20 10:40:40.376059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:39.702 10:40:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.702 [2024-11-20 10:40:40.385998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.702 [2024-11-20 10:40:40.396033] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.702 [2024-11-20 10:40:40.396046] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.702 [2024-11-20 10:40:40.396051] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.702 [2024-11-20 10:40:40.396057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.702 [2024-11-20 10:40:40.396075] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:39.702 [2024-11-20 10:40:40.396308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.702 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.702 [2024-11-20 10:40:40.396324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.702 [2024-11-20 10:40:40.396334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.702 [2024-11-20 10:40:40.396346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.702 [2024-11-20 10:40:40.396356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.703 [2024-11-20 10:40:40.396363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.703 [2024-11-20 10:40:40.396371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.703 [2024-11-20 10:40:40.396377] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.703 [2024-11-20 10:40:40.396382] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.703 [2024-11-20 10:40:40.396387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:39.703 [2024-11-20 10:40:40.406105] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.703 [2024-11-20 10:40:40.406116] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:39.703 [2024-11-20 10:40:40.406120] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.703 [2024-11-20 10:40:40.406125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.703 [2024-11-20 10:40:40.406138] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:39.703 [2024-11-20 10:40:40.406245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.703 [2024-11-20 10:40:40.406257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.703 [2024-11-20 10:40:40.406265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.703 [2024-11-20 10:40:40.406275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.703 [2024-11-20 10:40:40.406290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.703 [2024-11-20 10:40:40.406302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.703 [2024-11-20 10:40:40.406309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.703 [2024-11-20 10:40:40.406315] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.703 [2024-11-20 10:40:40.406319] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.703 [2024-11-20 10:40:40.406323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:39.703 [2024-11-20 10:40:40.416169] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.703 [2024-11-20 10:40:40.416183] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.703 [2024-11-20 10:40:40.416187] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.703 [2024-11-20 10:40:40.416192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.703 [2024-11-20 10:40:40.416205] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:39.703 [2024-11-20 10:40:40.416365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.703 [2024-11-20 10:40:40.416377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.703 [2024-11-20 10:40:40.416385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.703 [2024-11-20 10:40:40.416396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.703 [2024-11-20 10:40:40.416406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.703 [2024-11-20 10:40:40.416412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.703 [2024-11-20 10:40:40.416418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.703 [2024-11-20 10:40:40.416423] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:39.703 [2024-11-20 10:40:40.416428] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.703 [2024-11-20 10:40:40.416432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:39.703 [2024-11-20 10:40:40.426237] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.703 [2024-11-20 10:40:40.426250] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.703 [2024-11-20 10:40:40.426254] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.703 [2024-11-20 10:40:40.426258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.703 [2024-11-20 10:40:40.426273] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:39.703 [2024-11-20 10:40:40.426442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.703 [2024-11-20 10:40:40.426454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.703 [2024-11-20 10:40:40.426461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.703 [2024-11-20 10:40:40.426471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.703 [2024-11-20 10:40:40.426490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.703 [2024-11-20 10:40:40.426497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.703 [2024-11-20 10:40:40.426504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.703 [2024-11-20 10:40:40.426509] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.703 [2024-11-20 10:40:40.426514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.703 [2024-11-20 10:40:40.426518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
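The cycle above repeats because nothing is listening on 10.0.0.2:4420 any longer, so every connect() attempt fails with errno 111 (ECONNREFUSED) and the bdev layer schedules another reconnect roughly every 10 ms (note the .396 / .406 / .416 / .426 timestamps). As an illustration only, not part of the test suite, here is a minimal Python sketch that reproduces the errno 111 the posix_sock_create lines report, by connecting to a loopback port with no listener:

```python
import errno
import socket

def try_connect(addr: str, port: int) -> int:
    """Attempt one TCP connect; return 0 on success or the errno on failure.

    Mirrors what the posix_sock_create lines above log: a connect() to a
    port with no listener fails with errno 111 (ECONNREFUSED) on Linux.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
            return 0
        except OSError as exc:
            return exc.errno

# Bind without listen() just to let the kernel pick a free port number,
# then close it so the port has no listener at all.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
port = holder.getsockname()[1]
holder.close()

err = try_connect("127.0.0.1", port)
```

The SPDK target in this test has moved its listener to port 4421, which is why the trace later shows the discovery poller attaching the controller on 10.0.0.2:4421 instead.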
00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.703 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.963 [2024-11-20 10:40:40.436304] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:23:39.963 [2024-11-20 10:40:40.436316] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.963 [2024-11-20 10:40:40.436320] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.963 [2024-11-20 10:40:40.436326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.963 [2024-11-20 10:40:40.436340] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:39.963 [2024-11-20 10:40:40.436431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.963 [2024-11-20 10:40:40.436443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.963 [2024-11-20 10:40:40.436450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.963 [2024-11-20 10:40:40.436460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.963 [2024-11-20 10:40:40.436470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.963 [2024-11-20 10:40:40.436476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.963 [2024-11-20 10:40:40.436483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.963 [2024-11-20 10:40:40.436488] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.963 [2024-11-20 10:40:40.436496] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:39.963 [2024-11-20 10:40:40.436500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:39.963 [2024-11-20 10:40:40.446372] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.963 [2024-11-20 10:40:40.446386] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.963 [2024-11-20 10:40:40.446390] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.963 [2024-11-20 10:40:40.446394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.963 [2024-11-20 10:40:40.446409] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:39.963 [2024-11-20 10:40:40.446516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.963 [2024-11-20 10:40:40.446528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.963 [2024-11-20 10:40:40.446535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.963 [2024-11-20 10:40:40.446545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.963 [2024-11-20 10:40:40.446560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.963 [2024-11-20 10:40:40.446567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.963 [2024-11-20 10:40:40.446574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:39.963 [2024-11-20 10:40:40.446580] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.963 [2024-11-20 10:40:40.446584] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.963 [2024-11-20 10:40:40.446588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:39.963 [2024-11-20 10:40:40.456440] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:39.963 [2024-11-20 10:40:40.456450] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:39.963 [2024-11-20 10:40:40.456454] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:39.963 [2024-11-20 10:40:40.456459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.963 [2024-11-20 10:40:40.456471] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:39.963 [2024-11-20 10:40:40.456563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.963 [2024-11-20 10:40:40.456574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453390 with addr=10.0.0.2, port=4420 00:23:39.963 [2024-11-20 10:40:40.456581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453390 is same with the state(6) to be set 00:23:39.963 [2024-11-20 10:40:40.456591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453390 (9): Bad file descriptor 00:23:39.963 [2024-11-20 10:40:40.456600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:39.963 [2024-11-20 10:40:40.456606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:39.963 [2024-11-20 10:40:40.456613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:39.963 [2024-11-20 10:40:40.456621] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:39.963 [2024-11-20 10:40:40.456626] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:39.963 [2024-11-20 10:40:40.456630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
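The `waitforcondition` helper driving these checks (autotest_common.sh lines 918-922 in the xtrace) stores a shell condition string, then evals it up to `max=10` times via `(( max-- ))`, returning success as soon as it holds. A hypothetical Python equivalent of that polling loop (the sleep interval between polls is an assumption; it is not visible in this excerpt):

```python
import time
from typing import Callable

def wait_for_condition(cond: Callable[[], bool], max_tries: int = 10,
                       delay: float = 0.1) -> bool:
    """Poll `cond` up to `max_tries` times, mirroring the shell helper
    `waitforcondition` seen in the trace (local max=10; (( max-- ));
    eval "$cond"). Returns True as soon as the condition holds,
    False once the attempts are exhausted.
    """
    for _ in range(max_tries):
        if cond():
            return True
        time.sleep(delay)  # assumed interval; the real helper's sleep is not shown
    return False

# Example: a condition that becomes true on the third poll, standing in
# for checks like [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]].
state = {"polls": 0}
def bdev_list_ready() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_for_condition(bdev_list_ready, delay=0.0)
```

In the trace, each successful poll is visible as an `eval '[[' ...` line followed by `return 0`; the bounded retry count keeps a hung target from stalling the whole autotest run.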
00:23:39.963 [2024-11-20 10:40:40.458240] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:39.963 [2024-11-20 10:40:40.458255] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.963 
10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:39.963 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:39.964 10:40:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.964 
10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.964 10:40:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.964 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.223 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.157 [2024-11-20 10:40:41.785125] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:41.157 [2024-11-20 10:40:41.785141] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:41.157 [2024-11-20 10:40:41.785153] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.157 [2024-11-20 10:40:41.871422] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:41.416 [2024-11-20 10:40:42.011240] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:41.416 [2024-11-20 10:40:42.011791] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2450b40:1 started. 00:23:41.416 [2024-11-20 10:40:42.013437] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.416 [2024-11-20 10:40:42.013462] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.416 [2024-11-20 10:40:42.014797] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2450b40 was disconnected and freed. delete nvme_qpair. 
00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 request: 00:23:41.416 { 00:23:41.416 "name": "nvme", 00:23:41.416 "trtype": "tcp", 00:23:41.416 "traddr": "10.0.0.2", 00:23:41.416 "adrfam": "ipv4", 00:23:41.416 "trsvcid": "8009", 00:23:41.416 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:41.416 "wait_for_attach": true, 00:23:41.416 "method": "bdev_nvme_start_discovery", 00:23:41.416 "req_id": 1 00:23:41.416 } 00:23:41.416 Got JSON-RPC error response 00:23:41.416 response: 00:23:41.416 { 00:23:41.416 "code": -17, 00:23:41.416 
"message": "File exists" 00:23:41.416 } 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:41.416 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.417 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.675 request: 00:23:41.675 { 00:23:41.675 "name": "nvme_second", 00:23:41.675 "trtype": "tcp", 00:23:41.675 "traddr": "10.0.0.2", 00:23:41.675 "adrfam": "ipv4", 00:23:41.675 "trsvcid": "8009", 00:23:41.675 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:41.675 "wait_for_attach": true, 00:23:41.675 "method": "bdev_nvme_start_discovery", 00:23:41.675 "req_id": 1 00:23:41.675 } 00:23:41.675 Got JSON-RPC error response 00:23:41.675 response: 00:23:41.675 { 00:23:41.675 "code": -17, 00:23:41.675 "message": "File exists" 00:23:41.675 } 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:41.675 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:41.676 
10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.676 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.610 [2024-11-20 10:40:43.256836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.610 [2024-11-20 10:40:43.256863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2460270 with addr=10.0.0.2, port=8010 00:23:42.610 [2024-11-20 10:40:43.256875] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:42.610 [2024-11-20 10:40:43.256882] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:42.610 [2024-11-20 10:40:43.256888] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:43.544 [2024-11-20 10:40:44.259348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.544 [2024-11-20 10:40:44.259372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2460270 with addr=10.0.0.2, port=8010 00:23:43.544 [2024-11-20 10:40:44.259383] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:43.544 [2024-11-20 10:40:44.259389] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:43.544 
[2024-11-20 10:40:44.259395] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:44.953 [2024-11-20 10:40:45.261537] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:44.953 request: 00:23:44.953 { 00:23:44.953 "name": "nvme_second", 00:23:44.953 "trtype": "tcp", 00:23:44.953 "traddr": "10.0.0.2", 00:23:44.953 "adrfam": "ipv4", 00:23:44.953 "trsvcid": "8010", 00:23:44.953 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.953 "wait_for_attach": false, 00:23:44.953 "attach_timeout_ms": 3000, 00:23:44.953 "method": "bdev_nvme_start_discovery", 00:23:44.953 "req_id": 1 00:23:44.953 } 00:23:44.953 Got JSON-RPC error response 00:23:44.953 response: 00:23:44.953 { 00:23:44.953 "code": -110, 00:23:44.953 "message": "Connection timed out" 00:23:44.953 } 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3590783 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.953 rmmod nvme_tcp 00:23:44.953 rmmod nvme_fabrics 00:23:44.953 rmmod nvme_keyring 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3590752 ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3590752 00:23:44.953 
10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3590752 ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3590752 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590752 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590752' 00:23:44.953 killing process with pid 3590752 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3590752 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3590752 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.953 10:40:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.953 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.489 00:23:47.489 real 0m17.313s 00:23:47.489 user 0m20.664s 00:23:47.489 sys 0m5.889s 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.489 ************************************ 00:23:47.489 END TEST nvmf_host_discovery 00:23:47.489 ************************************ 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.489 ************************************ 00:23:47.489 START TEST nvmf_host_multipath_status 00:23:47.489 ************************************ 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:23:47.489 * Looking for test storage... 00:23:47.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.489 
10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.489 --rc genhtml_branch_coverage=1 00:23:47.489 --rc genhtml_function_coverage=1 00:23:47.489 --rc genhtml_legend=1 00:23:47.489 --rc geninfo_all_blocks=1 00:23:47.489 --rc geninfo_unexecuted_blocks=1 00:23:47.489 00:23:47.489 ' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.489 --rc genhtml_branch_coverage=1 00:23:47.489 --rc genhtml_function_coverage=1 00:23:47.489 --rc genhtml_legend=1 00:23:47.489 --rc geninfo_all_blocks=1 00:23:47.489 --rc geninfo_unexecuted_blocks=1 00:23:47.489 00:23:47.489 ' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.489 --rc genhtml_branch_coverage=1 00:23:47.489 --rc genhtml_function_coverage=1 00:23:47.489 --rc genhtml_legend=1 00:23:47.489 --rc geninfo_all_blocks=1 00:23:47.489 --rc geninfo_unexecuted_blocks=1 00:23:47.489 00:23:47.489 ' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.489 --rc genhtml_branch_coverage=1 00:23:47.489 --rc genhtml_function_coverage=1 00:23:47.489 --rc genhtml_legend=1 00:23:47.489 --rc geninfo_all_blocks=1 00:23:47.489 --rc geninfo_unexecuted_blocks=1 00:23:47.489 00:23:47.489 ' 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.489 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.490 10:40:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.490 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:54.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:54.054 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.054 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:54.055 Found net devices under 0000:86:00.0: cvl_0_0 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.055 10:40:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:54.055 Found net devices under 0000:86:00.1: cvl_0_1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.055 10:40:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:23:54.055 00:23:54.055 --- 10.0.0.2 ping statistics --- 00:23:54.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.055 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:54.055 00:23:54.055 --- 10.0.0.1 ping statistics --- 00:23:54.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.055 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3595854 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3595854 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3595854 ']' 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.055 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:54.055 [2024-11-20 10:40:53.953194] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:23:54.055 [2024-11-20 10:40:53.953249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.056 [2024-11-20 10:40:54.032909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:54.056 [2024-11-20 10:40:54.074968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.056 [2024-11-20 10:40:54.075006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
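The `nvmf_tcp_init` sequence traced earlier (lines @250-@291 of `nvmf/common.sh`) wires the two ports of one NIC into a point-to-point link by moving the target port into a network namespace. A sketch of that wiring, written as a helper that prints the commands rather than running them (the real steps need root; `print_nvmf_tcp_init` is a hypothetical name, and the addresses/port are the ones from this run):

```shell
#!/usr/bin/env bash
# Emits the namespace wiring performed by nvmf_tcp_init in the trace above:
# target port -> netns with 10.0.0.2, initiator port stays in the root
# namespace with 10.0.0.1, and TCP/4420 is opened for the NVMe-oF listener.
print_nvmf_tcp_init() {
  local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
  cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

print_nvmf_tcp_init cvl_0_0 cvl_0_1
```

This is why the target app is launched with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`, and why the cross-namespace pings to 10.0.0.2 and 10.0.0.1 serve as the connectivity check before the test proper starts.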
00:23:54.056 [2024-11-20 10:40:54.075014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.056 [2024-11-20 10:40:54.075020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.056 [2024-11-20 10:40:54.075025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.056 [2024-11-20 10:40:54.076254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.056 [2024-11-20 10:40:54.076258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3595854 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:54.056 [2024-11-20 10:40:54.373309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:54.056 Malloc0 00:23:54.056 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:54.313 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.313 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.572 [2024-11-20 10:40:55.209382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.572 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.829 [2024-11-20 10:40:55.405881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3596115 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3596115 /var/tmp/bdevperf.sock 00:23:54.830 10:40:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3596115 ']' 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.830 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.088 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.088 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:55.088 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:55.346 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:55.603 Nvme0n1 00:23:55.861 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:56.119 Nvme0n1 00:23:56.119 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:56.119 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.024 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:58.024 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:58.283 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:58.542 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:59.478 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:59.478 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.478 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.478 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.737 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.737 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.737 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.737 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.996 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.996 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.996 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.996 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.254 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.254 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.254 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.254 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.513 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.513 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.513 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.513 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:00.773 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.032 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.290 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:02.227 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:02.227 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.227 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.227 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.486 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.486 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.486 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.486 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.745 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.745 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.745 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.745 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.004 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.004 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.004 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.004 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.262 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.520 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.520 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:03.520 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.778 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:04.037 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:04.971 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:04.971 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:04.971 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.971 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.230 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.230 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.230 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.230 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.487 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.487 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.487 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.487 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.745 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.746 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.746 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.004 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.004 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.004 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.004 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.262 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.262 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:06.262 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:06.519 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:06.777 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:07.712 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:07.712 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.712 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.712 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.972 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.972 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.972 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.972 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.230 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.230 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.230 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.230 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.488 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.488 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.488 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.488 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.488 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.488 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.488 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.488 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.747 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.747 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:08.747 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.747 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.004 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.005 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:09.005 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:09.263 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.521 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:10.455 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:10.455 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.455 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.455 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.713 10:41:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.713 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:10.971 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.972 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:10.972 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:10.972 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.231 
10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.231 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:11.231 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.231 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:11.489 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:11.748 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.007 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:12.942 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:12.942 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:12.942 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.942 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.201 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.201 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.201 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.201 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.460 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.460 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.460 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.460 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.718 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.718 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.718 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.718 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.976 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.235 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.235 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:14.493 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:14.493 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:14.751 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.010 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:15.945 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:15.945 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:15.945 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:15.945 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.203 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.203 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.203 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.203 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.460 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.460 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.460 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.460 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.460 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.460 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.460 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:16.461 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.719 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.719 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.719 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.719 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:17.116 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.421 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.680 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:18.612 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:18.612 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:18.612 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.612 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.871 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.871 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.871 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.871 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.128 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.128 10:41:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.128 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.128 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.384 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.384 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.384 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.384 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.384 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.384 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.384 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.384 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.642 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.642 
10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.642 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.642 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.901 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.901 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:19.901 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.159 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:20.417 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:21.352 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:21.352 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.352 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.352 10:41:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.611 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.611 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.611 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.611 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.870 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.870 10:41:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.128 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.128 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.128 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.128 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.386 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.386 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.386 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.386 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.645 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.645 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:22.645 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.903 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:23.161 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:24.097 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:24.097 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.097 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.097 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.356 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.356 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:24.356 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.356 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.356 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.356 10:41:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.356 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.356 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.615 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.615 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.615 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.615 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.874 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.874 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.874 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.874 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.132 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.132 
10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:25.132 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.132 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3596115 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3596115 ']' 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3596115 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3596115 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596115' 00:24:25.391 killing process with pid 3596115 00:24:25.391 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3596115 00:24:25.391 
10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3596115 00:24:25.391 { 00:24:25.391 "results": [ 00:24:25.391 { 00:24:25.391 "job": "Nvme0n1", 00:24:25.391 "core_mask": "0x4", 00:24:25.391 "workload": "verify", 00:24:25.391 "status": "terminated", 00:24:25.391 "verify_range": { 00:24:25.391 "start": 0, 00:24:25.391 "length": 16384 00:24:25.391 }, 00:24:25.391 "queue_depth": 128, 00:24:25.391 "io_size": 4096, 00:24:25.391 "runtime": 29.092038, 00:24:25.391 "iops": 10499.745669244623, 00:24:25.392 "mibps": 41.01463152048681, 00:24:25.392 "io_failed": 0, 00:24:25.392 "io_timeout": 0, 00:24:25.392 "avg_latency_us": 12170.343579044338, 00:24:25.392 "min_latency_us": 116.64695652173913, 00:24:25.392 "max_latency_us": 3019898.88 00:24:25.392 } 00:24:25.392 ], 00:24:25.392 "core_count": 1 00:24:25.392 } 00:24:25.654 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3596115 00:24:25.654 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.654 [2024-11-20 10:40:55.469736] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:24:25.654 [2024-11-20 10:40:55.469792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596115 ] 00:24:25.654 [2024-11-20 10:40:55.542492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.654 [2024-11-20 10:40:55.584085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.654 Running I/O for 90 seconds... 
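The repeated `port_status` checks in the log above call the `bdev_nvme_get_io_paths` RPC and filter its JSON with jq expressions such as `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current`. As an illustration only, here is the same selection logic in Python; the sample JSON is hypothetical, reconstructed from the field names visible in those jq filters, not actual RPC output from this run:

```python
import json

# Hypothetical sample shaped like bdev_nvme_get_io_paths RPC output
# (field names taken from the jq filters in the log above).
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": true,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": false,
         "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(data, trsvcid, attr):
    """Python equivalent of the jq filter
    '.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").ATTR'.
    Returns the requested attribute of the first matching I/O path."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[attr]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The test script's `check_status` helper asserts a sequence of such lookups (current/connected/accessible per port) after each `nvmf_subsystem_listener_set_ana_state` change, which is why the same jq filter appears many times in the trace.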
00:24:25.654 11389.00 IOPS, 44.49 MiB/s [2024-11-20T09:41:26.385Z] 11313.00 IOPS, 44.19 MiB/s [2024-11-20T09:41:26.385Z] 11367.00 IOPS, 44.40 MiB/s [2024-11-20T09:41:26.385Z] 11302.25 IOPS, 44.15 MiB/s [2024-11-20T09:41:26.385Z] 11336.40 IOPS, 44.28 MiB/s [2024-11-20T09:41:26.385Z] 11340.33 IOPS, 44.30 MiB/s [2024-11-20T09:41:26.385Z] 11353.14 IOPS, 44.35 MiB/s [2024-11-20T09:41:26.385Z] 11336.12 IOPS, 44.28 MiB/s [2024-11-20T09:41:26.385Z] 11327.22 IOPS, 44.25 MiB/s [2024-11-20T09:41:26.385Z] 11328.90 IOPS, 44.25 MiB/s [2024-11-20T09:41:26.385Z] 11332.36 IOPS, 44.27 MiB/s [2024-11-20T09:41:26.385Z] 11315.75 IOPS, 44.20 MiB/s [2024-11-20T09:41:26.385Z] [2024-11-20 10:41:09.790364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.790618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.790627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.791038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.654 [2024-11-20 10:41:09.791063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.654 [2024-11-20 10:41:09.791079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.655 [2024-11-20 10:41:09.791895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.655 [2024-11-20 10:41:09.791902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.791916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.791923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.791946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.791966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.791974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.791987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.791995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.656 [2024-11-20 10:41:09.792728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.656 [2024-11-20 10:41:09.792753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:25.656 [2024-11-20 10:41:09.792893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.656 [2024-11-20 10:41:09.792901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.792917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.792925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.792942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.792955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.792974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.792983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.792999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.657 [2024-11-20 10:41:09.793697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.657 [2024-11-20 10:41:09.793841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:25.657 [2024-11-20 10:41:09.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.658 [2024-11-20 10:41:09.793866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:25.658 11258.46 IOPS, 43.98 MiB/s [2024-11-20T09:41:26.389Z] 10454.29 IOPS, 40.84 MiB/s [2024-11-20T09:41:26.389Z] 9757.33 IOPS, 38.11 MiB/s [2024-11-20T09:41:26.389Z] 9189.19 IOPS, 35.90 MiB/s [2024-11-20T09:41:26.389Z] 9315.53 IOPS, 36.39 MiB/s [2024-11-20T09:41:26.389Z] 9425.67 IOPS, 36.82 MiB/s [2024-11-20T09:41:26.389Z] 9576.37 IOPS, 37.41 MiB/s [2024-11-20T09:41:26.389Z] 9769.00 IOPS, 38.16 MiB/s [2024-11-20T09:41:26.389Z] 9944.33 IOPS, 38.85 MiB/s [2024-11-20T09:41:26.389Z] 10020.05 IOPS, 39.14 MiB/s [2024-11-20T09:41:26.389Z] 10066.13 IOPS, 39.32 MiB/s [2024-11-20T09:41:26.389Z] 10108.67 IOPS, 39.49 MiB/s [2024-11-20T09:41:26.389Z] 10227.88 IOPS, 39.95 MiB/s [2024-11-20T09:41:26.389Z] 10350.88 IOPS, 40.43 MiB/s [2024-11-20T09:41:26.389Z] [2024-11-20 10:41:23.622040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622115] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.658 [2024-11-20 10:41:23.622390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.622678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.622685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.623576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.623597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.623613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.623620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.623633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.623641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.623654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.623662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:25.658 [2024-11-20 10:41:23.623674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.658 [2024-11-20 10:41:23.623681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.623886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.623907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.623927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.623940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.623953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.659 [2024-11-20 10:41:23.624489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:25.659 [2024-11-20 10:41:23.624586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.659 [2024-11-20 10:41:23.624593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:25.660 [2024-11-20 10:41:23.624606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.660 [2024-11-20 10:41:23.624613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:25.660 [2024-11-20 10:41:23.624626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.660 [2024-11-20 10:41:23.624633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:25.660 10436.96 IOPS, 40.77 MiB/s [2024-11-20T09:41:26.391Z] 10464.39 IOPS, 40.88 MiB/s [2024-11-20T09:41:26.391Z] 10497.07 IOPS, 41.00 MiB/s [2024-11-20T09:41:26.391Z] Received shutdown signal, test time was about 29.092686 seconds 00:24:25.660 00:24:25.660 Latency(us) 00:24:25.660 [2024-11-20T09:41:26.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.660 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:25.660 Verification LBA range: start 0x0 length 0x4000 00:24:25.660 Nvme0n1 : 29.09 10499.75 41.01 0.00 0.00 12170.34 116.65 3019898.88 
00:24:25.660 [2024-11-20T09:41:26.391Z] =================================================================================================================== 00:24:25.660 [2024-11-20T09:41:26.391Z] Total : 10499.75 41.01 0.00 0.00 12170.34 116.65 3019898.88 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.660 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.660 rmmod nvme_tcp 00:24:25.919 rmmod nvme_fabrics 00:24:25.919 rmmod nvme_keyring 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:25.919 
10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3595854 ']' 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3595854 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3595854 ']' 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3595854 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3595854 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3595854' 00:24:25.919 killing process with pid 3595854 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3595854 00:24:25.919 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3595854 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:26.178 10:41:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.178 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.090 00:24:28.090 real 0m40.991s 00:24:28.090 user 1m51.344s 00:24:28.090 sys 0m11.685s 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.090 ************************************ 00:24:28.090 END TEST nvmf_host_multipath_status 00:24:28.090 ************************************ 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.090 10:41:28 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.090 ************************************ 00:24:28.090 START TEST nvmf_discovery_remove_ifc 00:24:28.090 ************************************ 00:24:28.090 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:28.351 * Looking for test storage... 00:24:28.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.351 10:41:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.351 
10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.351 --rc genhtml_branch_coverage=1 00:24:28.351 --rc genhtml_function_coverage=1 00:24:28.351 --rc genhtml_legend=1 00:24:28.351 --rc geninfo_all_blocks=1 00:24:28.351 --rc geninfo_unexecuted_blocks=1 00:24:28.351 00:24:28.351 ' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.351 --rc genhtml_branch_coverage=1 00:24:28.351 --rc genhtml_function_coverage=1 00:24:28.351 --rc genhtml_legend=1 00:24:28.351 --rc geninfo_all_blocks=1 00:24:28.351 --rc geninfo_unexecuted_blocks=1 00:24:28.351 00:24:28.351 ' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.351 --rc genhtml_branch_coverage=1 00:24:28.351 --rc genhtml_function_coverage=1 00:24:28.351 --rc genhtml_legend=1 00:24:28.351 --rc geninfo_all_blocks=1 00:24:28.351 --rc geninfo_unexecuted_blocks=1 00:24:28.351 00:24:28.351 ' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.351 --rc genhtml_branch_coverage=1 00:24:28.351 --rc genhtml_function_coverage=1 00:24:28.351 --rc genhtml_legend=1 
00:24:28.351 --rc geninfo_all_blocks=1 00:24:28.351 --rc geninfo_unexecuted_blocks=1 00:24:28.351 00:24:28.351 ' 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.351 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.351 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:28.352 
10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.352 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.920 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:34.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:34.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:34.921 Found net devices under 0000:86:00.0: cvl_0_0 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.921 10:41:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:34.921 Found net devices under 0000:86:00.1: cvl_0_1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.921 10:41:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.921 10:41:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:24:34.921 00:24:34.921 --- 10.0.0.2 ping statistics --- 00:24:34.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.921 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:24:34.921 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:34.922 00:24:34.922 --- 10.0.0.1 ping statistics --- 00:24:34.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.922 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3604810 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3604810 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3604810 ']' 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.922 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 [2024-11-20 10:41:34.964835] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:24:34.922 [2024-11-20 10:41:34.964885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.922 [2024-11-20 10:41:35.044856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.922 [2024-11-20 10:41:35.085878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.922 [2024-11-20 10:41:35.085916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:34.922 [2024-11-20 10:41:35.085924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.922 [2024-11-20 10:41:35.085930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.922 [2024-11-20 10:41:35.085935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.922 [2024-11-20 10:41:35.086527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 [2024-11-20 10:41:35.233873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.922 [2024-11-20 10:41:35.242048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:34.922 null0 00:24:34.922 [2024-11-20 10:41:35.274040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3604897 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3604897 /tmp/host.sock 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3604897 ']' 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:34.922 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 [2024-11-20 10:41:35.342368] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:24:34.922 [2024-11-20 10:41:35.342409] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604897 ] 00:24:34.922 [2024-11-20 10:41:35.415559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.922 [2024-11-20 10:41:35.458856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.922 10:41:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.922 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.300 [2024-11-20 10:41:36.641452] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:36.300 [2024-11-20 10:41:36.641474] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:36.300 [2024-11-20 10:41:36.641491] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.300 [2024-11-20 10:41:36.768880] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:36.300 [2024-11-20 10:41:36.995039] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:36.300 [2024-11-20 10:41:36.995805] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xed79f0:1 started. 
00:24:36.300 [2024-11-20 10:41:36.997154] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:36.300 [2024-11-20 10:41:36.997194] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:36.300 [2024-11-20 10:41:36.997213] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:36.300 [2024-11-20 10:41:36.997225] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:36.300 [2024-11-20 10:41:36.997243] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.300 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.300 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:36.300 [2024-11-20 10:41:36.999688] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xed79f0 was disconnected and freed. delete nvme_qpair. 
00:24:36.300 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.300 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.557 10:41:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.557 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.491 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.492 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.750 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.750 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.685 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.620 10:41:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.620 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.996 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.929 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.930 [2024-11-20 10:41:42.438801] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:41.930 [2024-11-20 10:41:42.438839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.930 [2024-11-20 10:41:42.438851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.930 [2024-11-20 10:41:42.438860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.930 [2024-11-20 10:41:42.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.930 [2024-11-20 10:41:42.438875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.930 [2024-11-20 10:41:42.438882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.930 [2024-11-20 10:41:42.438889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.930 [2024-11-20 10:41:42.438896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.930 [2024-11-20 10:41:42.438904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.930 [2024-11-20 10:41:42.438910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.930 [2024-11-20 10:41:42.438917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4220 is same with the state(6) to be set 00:24:41.930 [2024-11-20 10:41:42.448821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4220 (9): Bad file descriptor 00:24:41.930 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.930 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.930 [2024-11-20 10:41:42.458856] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:41.930 [2024-11-20 10:41:42.458869] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:41.930 [2024-11-20 10:41:42.458874] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:41.930 [2024-11-20 10:41:42.458882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:41.930 [2024-11-20 10:41:42.458902] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.866 [2024-11-20 10:41:43.472016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:42.866 [2024-11-20 10:41:43.472084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4220 with addr=10.0.0.2, port=4420 00:24:42.866 [2024-11-20 10:41:43.472114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4220 is same with the state(6) to be set 00:24:42.866 [2024-11-20 10:41:43.472166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4220 (9): Bad file descriptor 00:24:42.866 [2024-11-20 10:41:43.473112] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:42.866 [2024-11-20 10:41:43.473175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:42.866 [2024-11-20 10:41:43.473198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:42.866 [2024-11-20 10:41:43.473221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:42.866 [2024-11-20 10:41:43.473241] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:42.866 [2024-11-20 10:41:43.473257] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:42.866 [2024-11-20 10:41:43.473271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:42.866 [2024-11-20 10:41:43.473292] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:42.866 [2024-11-20 10:41:43.473306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.866 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.803 [2024-11-20 10:41:44.475827] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:43.803 [2024-11-20 10:41:44.475847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:43.803 [2024-11-20 10:41:44.475858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:43.803 [2024-11-20 10:41:44.475865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:43.803 [2024-11-20 10:41:44.475871] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:43.803 [2024-11-20 10:41:44.475878] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:43.803 [2024-11-20 10:41:44.475887] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:43.803 [2024-11-20 10:41:44.475891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:43.803 [2024-11-20 10:41:44.475912] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:43.803 [2024-11-20 10:41:44.475932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.803 [2024-11-20 10:41:44.475941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.803 [2024-11-20 10:41:44.475954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.803 [2024-11-20 10:41:44.475962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.803 [2024-11-20 10:41:44.475969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:43.804 [2024-11-20 10:41:44.475975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.804 [2024-11-20 10:41:44.475983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.804 [2024-11-20 10:41:44.475989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.804 [2024-11-20 10:41:44.475997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.804 [2024-11-20 10:41:44.476004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.804 [2024-11-20 10:41:44.476010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:43.804 [2024-11-20 10:41:44.476510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3900 (9): Bad file descriptor 00:24:43.804 [2024-11-20 10:41:44.477521] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:43.804 [2024-11-20 10:41:44.477533] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.804 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.061 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:44.062 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.997 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.255 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:45.255 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.823 [2024-11-20 10:41:46.527378] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:45.823 [2024-11-20 10:41:46.527396] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:45.823 [2024-11-20 10:41:46.527408] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:46.081 [2024-11-20 10:41:46.653807] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:46.081 10:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.341 [2024-11-20 10:41:46.876956] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:46.341 [2024-11-20 10:41:46.877561] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xea8760:1 started. 
00:24:46.341 [2024-11-20 10:41:46.878630] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:46.341 [2024-11-20 10:41:46.878662] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:46.341 [2024-11-20 10:41:46.878679] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:46.341 [2024-11-20 10:41:46.878691] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:46.341 [2024-11-20 10:41:46.878698] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:46.341 [2024-11-20 10:41:46.885050] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xea8760 was disconnected and freed. delete nvme_qpair. 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:47.277 10:41:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3604897 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3604897 ']' 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3604897 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604897 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.277 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604897' 00:24:47.277 killing process with pid 3604897 00:24:47.278 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3604897 00:24:47.278 10:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3604897 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.536 
10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.536 rmmod nvme_tcp 00:24:47.536 rmmod nvme_fabrics 00:24:47.536 rmmod nvme_keyring 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3604810 ']' 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3604810 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3604810 ']' 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3604810 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604810 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604810' 00:24:47.536 
killing process with pid 3604810 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3604810 00:24:47.536 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3604810 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.794 10:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.701 00:24:49.701 real 0m21.570s 00:24:49.701 user 0m26.999s 00:24:49.701 sys 0m5.890s 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.701 ************************************ 00:24:49.701 END TEST nvmf_discovery_remove_ifc 00:24:49.701 ************************************ 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.701 10:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.960 ************************************ 00:24:49.960 START TEST nvmf_identify_kernel_target 00:24:49.960 ************************************ 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:49.961 * Looking for test storage... 
00:24:49.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:49.961 10:41:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.961 10:41:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.961 --rc genhtml_branch_coverage=1 00:24:49.961 --rc genhtml_function_coverage=1 00:24:49.961 --rc genhtml_legend=1 00:24:49.961 --rc geninfo_all_blocks=1 00:24:49.961 --rc geninfo_unexecuted_blocks=1 00:24:49.961 00:24:49.961 ' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.961 --rc genhtml_branch_coverage=1 00:24:49.961 --rc genhtml_function_coverage=1 00:24:49.961 --rc genhtml_legend=1 00:24:49.961 --rc geninfo_all_blocks=1 00:24:49.961 --rc geninfo_unexecuted_blocks=1 00:24:49.961 00:24:49.961 ' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.961 --rc genhtml_branch_coverage=1 00:24:49.961 --rc genhtml_function_coverage=1 00:24:49.961 --rc genhtml_legend=1 00:24:49.961 --rc geninfo_all_blocks=1 00:24:49.961 --rc geninfo_unexecuted_blocks=1 00:24:49.961 00:24:49.961 ' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.961 --rc genhtml_branch_coverage=1 00:24:49.961 --rc genhtml_function_coverage=1 00:24:49.961 --rc genhtml_legend=1 00:24:49.961 --rc geninfo_all_blocks=1 00:24:49.961 --rc geninfo_unexecuted_blocks=1 00:24:49.961 00:24:49.961 ' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.961 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.962 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.529 10:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:56.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.529 10:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:56.529 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.529 10:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:56.529 Found net devices under 0000:86:00.0: cvl_0_0 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:56.529 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.529 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:56.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:24:56.530 00:24:56.530 --- 10.0.0.2 ping statistics --- 00:24:56.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.530 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:56.530 00:24:56.530 --- 10.0.0.1 ping statistics --- 00:24:56.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.530 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:56.530 
10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:56.530 10:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:59.063 Waiting for block devices as requested 00:24:59.063 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:59.063 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.063 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.063 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.063 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.063 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.322 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:59.322 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:59.322 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:59.581 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.581 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.581 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.581 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.840 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.840 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:59.840 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:00.100 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:00.100 No valid GPT data, bailing 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:00.100 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:00.101 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:00.361 00:25:00.361 Discovery Log Number of Records 2, Generation counter 2 00:25:00.361 =====Discovery Log Entry 0====== 00:25:00.361 trtype: tcp 00:25:00.361 adrfam: ipv4 00:25:00.361 subtype: current discovery subsystem 
00:25:00.361 treq: not specified, sq flow control disable supported 00:25:00.361 portid: 1 00:25:00.361 trsvcid: 4420 00:25:00.361 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:00.361 traddr: 10.0.0.1 00:25:00.361 eflags: none 00:25:00.361 sectype: none 00:25:00.361 =====Discovery Log Entry 1====== 00:25:00.361 trtype: tcp 00:25:00.361 adrfam: ipv4 00:25:00.361 subtype: nvme subsystem 00:25:00.361 treq: not specified, sq flow control disable supported 00:25:00.361 portid: 1 00:25:00.361 trsvcid: 4420 00:25:00.361 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:00.361 traddr: 10.0.0.1 00:25:00.361 eflags: none 00:25:00.362 sectype: none 00:25:00.362 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:00.362 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:00.362 ===================================================== 00:25:00.362 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:00.362 ===================================================== 00:25:00.362 Controller Capabilities/Features 00:25:00.362 ================================ 00:25:00.362 Vendor ID: 0000 00:25:00.362 Subsystem Vendor ID: 0000 00:25:00.362 Serial Number: e738cd1bc067051d737a 00:25:00.362 Model Number: Linux 00:25:00.362 Firmware Version: 6.8.9-20 00:25:00.362 Recommended Arb Burst: 0 00:25:00.362 IEEE OUI Identifier: 00 00 00 00:25:00.362 Multi-path I/O 00:25:00.362 May have multiple subsystem ports: No 00:25:00.362 May have multiple controllers: No 00:25:00.362 Associated with SR-IOV VF: No 00:25:00.362 Max Data Transfer Size: Unlimited 00:25:00.362 Max Number of Namespaces: 0 00:25:00.362 Max Number of I/O Queues: 1024 00:25:00.362 NVMe Specification Version (VS): 1.3 00:25:00.362 NVMe Specification Version (Identify): 1.3 00:25:00.362 Maximum Queue Entries: 1024 
00:25:00.362 Contiguous Queues Required: No 00:25:00.362 Arbitration Mechanisms Supported 00:25:00.362 Weighted Round Robin: Not Supported 00:25:00.362 Vendor Specific: Not Supported 00:25:00.362 Reset Timeout: 7500 ms 00:25:00.362 Doorbell Stride: 4 bytes 00:25:00.362 NVM Subsystem Reset: Not Supported 00:25:00.362 Command Sets Supported 00:25:00.362 NVM Command Set: Supported 00:25:00.362 Boot Partition: Not Supported 00:25:00.362 Memory Page Size Minimum: 4096 bytes 00:25:00.362 Memory Page Size Maximum: 4096 bytes 00:25:00.362 Persistent Memory Region: Not Supported 00:25:00.362 Optional Asynchronous Events Supported 00:25:00.362 Namespace Attribute Notices: Not Supported 00:25:00.362 Firmware Activation Notices: Not Supported 00:25:00.362 ANA Change Notices: Not Supported 00:25:00.362 PLE Aggregate Log Change Notices: Not Supported 00:25:00.362 LBA Status Info Alert Notices: Not Supported 00:25:00.362 EGE Aggregate Log Change Notices: Not Supported 00:25:00.362 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.362 Zone Descriptor Change Notices: Not Supported 00:25:00.362 Discovery Log Change Notices: Supported 00:25:00.362 Controller Attributes 00:25:00.362 128-bit Host Identifier: Not Supported 00:25:00.362 Non-Operational Permissive Mode: Not Supported 00:25:00.362 NVM Sets: Not Supported 00:25:00.362 Read Recovery Levels: Not Supported 00:25:00.362 Endurance Groups: Not Supported 00:25:00.362 Predictable Latency Mode: Not Supported 00:25:00.362 Traffic Based Keep ALive: Not Supported 00:25:00.362 Namespace Granularity: Not Supported 00:25:00.362 SQ Associations: Not Supported 00:25:00.362 UUID List: Not Supported 00:25:00.362 Multi-Domain Subsystem: Not Supported 00:25:00.362 Fixed Capacity Management: Not Supported 00:25:00.362 Variable Capacity Management: Not Supported 00:25:00.362 Delete Endurance Group: Not Supported 00:25:00.362 Delete NVM Set: Not Supported 00:25:00.362 Extended LBA Formats Supported: Not Supported 00:25:00.362 Flexible 
Data Placement Supported: Not Supported 00:25:00.362 00:25:00.362 Controller Memory Buffer Support 00:25:00.362 ================================ 00:25:00.362 Supported: No 00:25:00.362 00:25:00.362 Persistent Memory Region Support 00:25:00.362 ================================ 00:25:00.362 Supported: No 00:25:00.362 00:25:00.362 Admin Command Set Attributes 00:25:00.362 ============================ 00:25:00.362 Security Send/Receive: Not Supported 00:25:00.362 Format NVM: Not Supported 00:25:00.362 Firmware Activate/Download: Not Supported 00:25:00.362 Namespace Management: Not Supported 00:25:00.362 Device Self-Test: Not Supported 00:25:00.362 Directives: Not Supported 00:25:00.362 NVMe-MI: Not Supported 00:25:00.362 Virtualization Management: Not Supported 00:25:00.362 Doorbell Buffer Config: Not Supported 00:25:00.362 Get LBA Status Capability: Not Supported 00:25:00.362 Command & Feature Lockdown Capability: Not Supported 00:25:00.362 Abort Command Limit: 1 00:25:00.362 Async Event Request Limit: 1 00:25:00.362 Number of Firmware Slots: N/A 00:25:00.362 Firmware Slot 1 Read-Only: N/A 00:25:00.362 Firmware Activation Without Reset: N/A 00:25:00.362 Multiple Update Detection Support: N/A 00:25:00.362 Firmware Update Granularity: No Information Provided 00:25:00.362 Per-Namespace SMART Log: No 00:25:00.362 Asymmetric Namespace Access Log Page: Not Supported 00:25:00.362 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:00.362 Command Effects Log Page: Not Supported 00:25:00.362 Get Log Page Extended Data: Supported 00:25:00.362 Telemetry Log Pages: Not Supported 00:25:00.362 Persistent Event Log Pages: Not Supported 00:25:00.362 Supported Log Pages Log Page: May Support 00:25:00.362 Commands Supported & Effects Log Page: Not Supported 00:25:00.362 Feature Identifiers & Effects Log Page:May Support 00:25:00.362 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.362 Data Area 4 for Telemetry Log: Not Supported 00:25:00.362 Error Log Page Entries 
Supported: 1 00:25:00.362 Keep Alive: Not Supported 00:25:00.362 00:25:00.362 NVM Command Set Attributes 00:25:00.362 ========================== 00:25:00.362 Submission Queue Entry Size 00:25:00.362 Max: 1 00:25:00.362 Min: 1 00:25:00.362 Completion Queue Entry Size 00:25:00.362 Max: 1 00:25:00.362 Min: 1 00:25:00.362 Number of Namespaces: 0 00:25:00.362 Compare Command: Not Supported 00:25:00.362 Write Uncorrectable Command: Not Supported 00:25:00.362 Dataset Management Command: Not Supported 00:25:00.362 Write Zeroes Command: Not Supported 00:25:00.362 Set Features Save Field: Not Supported 00:25:00.362 Reservations: Not Supported 00:25:00.362 Timestamp: Not Supported 00:25:00.362 Copy: Not Supported 00:25:00.362 Volatile Write Cache: Not Present 00:25:00.362 Atomic Write Unit (Normal): 1 00:25:00.362 Atomic Write Unit (PFail): 1 00:25:00.362 Atomic Compare & Write Unit: 1 00:25:00.362 Fused Compare & Write: Not Supported 00:25:00.362 Scatter-Gather List 00:25:00.362 SGL Command Set: Supported 00:25:00.362 SGL Keyed: Not Supported 00:25:00.362 SGL Bit Bucket Descriptor: Not Supported 00:25:00.362 SGL Metadata Pointer: Not Supported 00:25:00.362 Oversized SGL: Not Supported 00:25:00.362 SGL Metadata Address: Not Supported 00:25:00.362 SGL Offset: Supported 00:25:00.362 Transport SGL Data Block: Not Supported 00:25:00.362 Replay Protected Memory Block: Not Supported 00:25:00.362 00:25:00.362 Firmware Slot Information 00:25:00.362 ========================= 00:25:00.362 Active slot: 0 00:25:00.362 00:25:00.362 00:25:00.362 Error Log 00:25:00.362 ========= 00:25:00.362 00:25:00.362 Active Namespaces 00:25:00.362 ================= 00:25:00.362 Discovery Log Page 00:25:00.362 ================== 00:25:00.362 Generation Counter: 2 00:25:00.362 Number of Records: 2 00:25:00.363 Record Format: 0 00:25:00.363 00:25:00.363 Discovery Log Entry 0 00:25:00.363 ---------------------- 00:25:00.363 Transport Type: 3 (TCP) 00:25:00.363 Address Family: 1 (IPv4) 00:25:00.363 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:00.363 Entry Flags: 00:25:00.363 Duplicate Returned Information: 0 00:25:00.363 Explicit Persistent Connection Support for Discovery: 0 00:25:00.363 Transport Requirements: 00:25:00.363 Secure Channel: Not Specified 00:25:00.363 Port ID: 1 (0x0001) 00:25:00.363 Controller ID: 65535 (0xffff) 00:25:00.363 Admin Max SQ Size: 32 00:25:00.363 Transport Service Identifier: 4420 00:25:00.363 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:00.363 Transport Address: 10.0.0.1 00:25:00.363 Discovery Log Entry 1 00:25:00.363 ---------------------- 00:25:00.363 Transport Type: 3 (TCP) 00:25:00.363 Address Family: 1 (IPv4) 00:25:00.363 Subsystem Type: 2 (NVM Subsystem) 00:25:00.363 Entry Flags: 00:25:00.363 Duplicate Returned Information: 0 00:25:00.363 Explicit Persistent Connection Support for Discovery: 0 00:25:00.363 Transport Requirements: 00:25:00.363 Secure Channel: Not Specified 00:25:00.363 Port ID: 1 (0x0001) 00:25:00.363 Controller ID: 65535 (0xffff) 00:25:00.363 Admin Max SQ Size: 32 00:25:00.363 Transport Service Identifier: 4420 00:25:00.363 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:00.363 Transport Address: 10.0.0.1 00:25:00.363 10:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:00.363 get_feature(0x01) failed 00:25:00.363 get_feature(0x02) failed 00:25:00.363 get_feature(0x04) failed 00:25:00.363 ===================================================== 00:25:00.363 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:00.363 ===================================================== 00:25:00.363 Controller Capabilities/Features 00:25:00.363 ================================ 00:25:00.363 Vendor ID: 0000 00:25:00.363 Subsystem Vendor ID: 
0000 00:25:00.363 Serial Number: e95e526203b844b7602b 00:25:00.363 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.363 Firmware Version: 6.8.9-20 00:25:00.363 Recommended Arb Burst: 6 00:25:00.363 IEEE OUI Identifier: 00 00 00 00:25:00.363 Multi-path I/O 00:25:00.363 May have multiple subsystem ports: Yes 00:25:00.363 May have multiple controllers: Yes 00:25:00.363 Associated with SR-IOV VF: No 00:25:00.363 Max Data Transfer Size: Unlimited 00:25:00.363 Max Number of Namespaces: 1024 00:25:00.363 Max Number of I/O Queues: 128 00:25:00.363 NVMe Specification Version (VS): 1.3 00:25:00.363 NVMe Specification Version (Identify): 1.3 00:25:00.363 Maximum Queue Entries: 1024 00:25:00.363 Contiguous Queues Required: No 00:25:00.363 Arbitration Mechanisms Supported 00:25:00.363 Weighted Round Robin: Not Supported 00:25:00.363 Vendor Specific: Not Supported 00:25:00.363 Reset Timeout: 7500 ms 00:25:00.363 Doorbell Stride: 4 bytes 00:25:00.363 NVM Subsystem Reset: Not Supported 00:25:00.363 Command Sets Supported 00:25:00.363 NVM Command Set: Supported 00:25:00.363 Boot Partition: Not Supported 00:25:00.363 Memory Page Size Minimum: 4096 bytes 00:25:00.363 Memory Page Size Maximum: 4096 bytes 00:25:00.363 Persistent Memory Region: Not Supported 00:25:00.363 Optional Asynchronous Events Supported 00:25:00.363 Namespace Attribute Notices: Supported 00:25:00.363 Firmware Activation Notices: Not Supported 00:25:00.363 ANA Change Notices: Supported 00:25:00.363 PLE Aggregate Log Change Notices: Not Supported 00:25:00.363 LBA Status Info Alert Notices: Not Supported 00:25:00.363 EGE Aggregate Log Change Notices: Not Supported 00:25:00.363 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.363 Zone Descriptor Change Notices: Not Supported 00:25:00.363 Discovery Log Change Notices: Not Supported 00:25:00.363 Controller Attributes 00:25:00.363 128-bit Host Identifier: Supported 00:25:00.363 Non-Operational Permissive Mode: Not Supported 00:25:00.363 NVM Sets: Not 
Supported 00:25:00.363 Read Recovery Levels: Not Supported 00:25:00.363 Endurance Groups: Not Supported 00:25:00.363 Predictable Latency Mode: Not Supported 00:25:00.363 Traffic Based Keep ALive: Supported 00:25:00.363 Namespace Granularity: Not Supported 00:25:00.363 SQ Associations: Not Supported 00:25:00.363 UUID List: Not Supported 00:25:00.363 Multi-Domain Subsystem: Not Supported 00:25:00.363 Fixed Capacity Management: Not Supported 00:25:00.363 Variable Capacity Management: Not Supported 00:25:00.363 Delete Endurance Group: Not Supported 00:25:00.363 Delete NVM Set: Not Supported 00:25:00.363 Extended LBA Formats Supported: Not Supported 00:25:00.363 Flexible Data Placement Supported: Not Supported 00:25:00.363 00:25:00.363 Controller Memory Buffer Support 00:25:00.363 ================================ 00:25:00.363 Supported: No 00:25:00.363 00:25:00.363 Persistent Memory Region Support 00:25:00.363 ================================ 00:25:00.363 Supported: No 00:25:00.363 00:25:00.363 Admin Command Set Attributes 00:25:00.363 ============================ 00:25:00.363 Security Send/Receive: Not Supported 00:25:00.363 Format NVM: Not Supported 00:25:00.363 Firmware Activate/Download: Not Supported 00:25:00.363 Namespace Management: Not Supported 00:25:00.363 Device Self-Test: Not Supported 00:25:00.363 Directives: Not Supported 00:25:00.363 NVMe-MI: Not Supported 00:25:00.363 Virtualization Management: Not Supported 00:25:00.363 Doorbell Buffer Config: Not Supported 00:25:00.363 Get LBA Status Capability: Not Supported 00:25:00.363 Command & Feature Lockdown Capability: Not Supported 00:25:00.363 Abort Command Limit: 4 00:25:00.363 Async Event Request Limit: 4 00:25:00.363 Number of Firmware Slots: N/A 00:25:00.363 Firmware Slot 1 Read-Only: N/A 00:25:00.363 Firmware Activation Without Reset: N/A 00:25:00.363 Multiple Update Detection Support: N/A 00:25:00.363 Firmware Update Granularity: No Information Provided 00:25:00.363 Per-Namespace SMART Log: Yes 
00:25:00.363 Asymmetric Namespace Access Log Page: Supported 00:25:00.363 ANA Transition Time : 10 sec 00:25:00.363 00:25:00.363 Asymmetric Namespace Access Capabilities 00:25:00.363 ANA Optimized State : Supported 00:25:00.363 ANA Non-Optimized State : Supported 00:25:00.363 ANA Inaccessible State : Supported 00:25:00.363 ANA Persistent Loss State : Supported 00:25:00.363 ANA Change State : Supported 00:25:00.363 ANAGRPID is not changed : No 00:25:00.363 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:00.363 00:25:00.363 ANA Group Identifier Maximum : 128 00:25:00.363 Number of ANA Group Identifiers : 128 00:25:00.363 Max Number of Allowed Namespaces : 1024 00:25:00.363 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:00.363 Command Effects Log Page: Supported 00:25:00.363 Get Log Page Extended Data: Supported 00:25:00.363 Telemetry Log Pages: Not Supported 00:25:00.364 Persistent Event Log Pages: Not Supported 00:25:00.364 Supported Log Pages Log Page: May Support 00:25:00.364 Commands Supported & Effects Log Page: Not Supported 00:25:00.364 Feature Identifiers & Effects Log Page:May Support 00:25:00.364 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.364 Data Area 4 for Telemetry Log: Not Supported 00:25:00.364 Error Log Page Entries Supported: 128 00:25:00.364 Keep Alive: Supported 00:25:00.364 Keep Alive Granularity: 1000 ms 00:25:00.364 00:25:00.364 NVM Command Set Attributes 00:25:00.364 ========================== 00:25:00.364 Submission Queue Entry Size 00:25:00.364 Max: 64 00:25:00.364 Min: 64 00:25:00.364 Completion Queue Entry Size 00:25:00.364 Max: 16 00:25:00.364 Min: 16 00:25:00.364 Number of Namespaces: 1024 00:25:00.364 Compare Command: Not Supported 00:25:00.364 Write Uncorrectable Command: Not Supported 00:25:00.364 Dataset Management Command: Supported 00:25:00.364 Write Zeroes Command: Supported 00:25:00.364 Set Features Save Field: Not Supported 00:25:00.364 Reservations: Not Supported 00:25:00.364 Timestamp: Not Supported 
00:25:00.364 Copy: Not Supported 00:25:00.364 Volatile Write Cache: Present 00:25:00.364 Atomic Write Unit (Normal): 1 00:25:00.364 Atomic Write Unit (PFail): 1 00:25:00.364 Atomic Compare & Write Unit: 1 00:25:00.364 Fused Compare & Write: Not Supported 00:25:00.364 Scatter-Gather List 00:25:00.364 SGL Command Set: Supported 00:25:00.364 SGL Keyed: Not Supported 00:25:00.364 SGL Bit Bucket Descriptor: Not Supported 00:25:00.364 SGL Metadata Pointer: Not Supported 00:25:00.364 Oversized SGL: Not Supported 00:25:00.364 SGL Metadata Address: Not Supported 00:25:00.364 SGL Offset: Supported 00:25:00.364 Transport SGL Data Block: Not Supported 00:25:00.364 Replay Protected Memory Block: Not Supported 00:25:00.364 00:25:00.364 Firmware Slot Information 00:25:00.364 ========================= 00:25:00.364 Active slot: 0 00:25:00.364 00:25:00.364 Asymmetric Namespace Access 00:25:00.364 =========================== 00:25:00.364 Change Count : 0 00:25:00.364 Number of ANA Group Descriptors : 1 00:25:00.364 ANA Group Descriptor : 0 00:25:00.364 ANA Group ID : 1 00:25:00.364 Number of NSID Values : 1 00:25:00.364 Change Count : 0 00:25:00.364 ANA State : 1 00:25:00.364 Namespace Identifier : 1 00:25:00.364 00:25:00.364 Commands Supported and Effects 00:25:00.364 ============================== 00:25:00.364 Admin Commands 00:25:00.364 -------------- 00:25:00.364 Get Log Page (02h): Supported 00:25:00.364 Identify (06h): Supported 00:25:00.364 Abort (08h): Supported 00:25:00.364 Set Features (09h): Supported 00:25:00.364 Get Features (0Ah): Supported 00:25:00.364 Asynchronous Event Request (0Ch): Supported 00:25:00.364 Keep Alive (18h): Supported 00:25:00.364 I/O Commands 00:25:00.364 ------------ 00:25:00.364 Flush (00h): Supported 00:25:00.364 Write (01h): Supported LBA-Change 00:25:00.364 Read (02h): Supported 00:25:00.364 Write Zeroes (08h): Supported LBA-Change 00:25:00.364 Dataset Management (09h): Supported 00:25:00.364 00:25:00.364 Error Log 00:25:00.364 ========= 
00:25:00.364 Entry: 0 00:25:00.364 Error Count: 0x3 00:25:00.364 Submission Queue Id: 0x0 00:25:00.364 Command Id: 0x5 00:25:00.364 Phase Bit: 0 00:25:00.364 Status Code: 0x2 00:25:00.364 Status Code Type: 0x0 00:25:00.364 Do Not Retry: 1 00:25:00.364 Error Location: 0x28 00:25:00.364 LBA: 0x0 00:25:00.364 Namespace: 0x0 00:25:00.364 Vendor Log Page: 0x0 00:25:00.364 ----------- 00:25:00.364 Entry: 1 00:25:00.364 Error Count: 0x2 00:25:00.364 Submission Queue Id: 0x0 00:25:00.364 Command Id: 0x5 00:25:00.364 Phase Bit: 0 00:25:00.364 Status Code: 0x2 00:25:00.364 Status Code Type: 0x0 00:25:00.364 Do Not Retry: 1 00:25:00.364 Error Location: 0x28 00:25:00.364 LBA: 0x0 00:25:00.364 Namespace: 0x0 00:25:00.364 Vendor Log Page: 0x0 00:25:00.364 ----------- 00:25:00.364 Entry: 2 00:25:00.364 Error Count: 0x1 00:25:00.364 Submission Queue Id: 0x0 00:25:00.364 Command Id: 0x4 00:25:00.364 Phase Bit: 0 00:25:00.364 Status Code: 0x2 00:25:00.364 Status Code Type: 0x0 00:25:00.364 Do Not Retry: 1 00:25:00.364 Error Location: 0x28 00:25:00.364 LBA: 0x0 00:25:00.364 Namespace: 0x0 00:25:00.364 Vendor Log Page: 0x0 00:25:00.364 00:25:00.364 Number of Queues 00:25:00.364 ================ 00:25:00.364 Number of I/O Submission Queues: 128 00:25:00.364 Number of I/O Completion Queues: 128 00:25:00.364 00:25:00.364 ZNS Specific Controller Data 00:25:00.364 ============================ 00:25:00.364 Zone Append Size Limit: 0 00:25:00.364 00:25:00.364 00:25:00.364 Active Namespaces 00:25:00.364 ================= 00:25:00.364 get_feature(0x05) failed 00:25:00.364 Namespace ID:1 00:25:00.364 Command Set Identifier: NVM (00h) 00:25:00.364 Deallocate: Supported 00:25:00.364 Deallocated/Unwritten Error: Not Supported 00:25:00.364 Deallocated Read Value: Unknown 00:25:00.364 Deallocate in Write Zeroes: Not Supported 00:25:00.364 Deallocated Guard Field: 0xFFFF 00:25:00.364 Flush: Supported 00:25:00.364 Reservation: Not Supported 00:25:00.364 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:00.364 Size (in LBAs): 1953525168 (931GiB) 00:25:00.364 Capacity (in LBAs): 1953525168 (931GiB) 00:25:00.364 Utilization (in LBAs): 1953525168 (931GiB) 00:25:00.364 UUID: ea0bda32-8de3-475e-9eba-c99c315aa77e 00:25:00.364 Thin Provisioning: Not Supported 00:25:00.364 Per-NS Atomic Units: Yes 00:25:00.364 Atomic Boundary Size (Normal): 0 00:25:00.364 Atomic Boundary Size (PFail): 0 00:25:00.364 Atomic Boundary Offset: 0 00:25:00.364 NGUID/EUI64 Never Reused: No 00:25:00.364 ANA group ID: 1 00:25:00.364 Namespace Write Protected: No 00:25:00.364 Number of LBA Formats: 1 00:25:00.364 Current LBA Format: LBA Format #00 00:25:00.364 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:00.364 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.364 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.364 rmmod nvme_tcp 00:25:00.623 rmmod nvme_fabrics 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.623 10:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:02.526 10:42:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:02.526 10:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.884 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.884 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
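The `clean_kernel_target` steps traced above reverse that setup in strict order (disable the namespace, unlink the port, remove configfs directories, unload the modules). A minimal sketch with the same paths, again assuming root and loaded nvmet modules:

```shell
#!/usr/bin/env bash
# Sketch of clean_kernel_target from nvmf/common.sh: tear down in reverse order.
# Requires root; commands mirror the echo/rm/rmdir/modprobe calls in the log.
cfg=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # disable the namespace first
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink subsystem from port
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                          # unload target modules
```

Order matters: configfs refuses to rmdir a subsystem that is still linked to a port or has an enabled namespace, which is why the unlink and disable come before the rmdir calls.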
00:25:06.451 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:06.451 00:25:06.451 real 0m16.657s 00:25:06.451 user 0m4.374s 00:25:06.451 sys 0m8.703s 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.451 ************************************ 00:25:06.451 END TEST nvmf_identify_kernel_target 00:25:06.451 ************************************ 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.451 ************************************ 00:25:06.451 START TEST nvmf_auth_host 00:25:06.451 ************************************ 00:25:06.451 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:06.710 * Looking for test storage... 
00:25:06.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:06.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.710 --rc genhtml_branch_coverage=1 00:25:06.710 --rc genhtml_function_coverage=1 00:25:06.710 --rc genhtml_legend=1 00:25:06.710 --rc geninfo_all_blocks=1 00:25:06.710 --rc geninfo_unexecuted_blocks=1 00:25:06.710 00:25:06.710 ' 00:25:06.710 10:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:06.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.710 --rc genhtml_branch_coverage=1 00:25:06.710 --rc genhtml_function_coverage=1 00:25:06.710 --rc genhtml_legend=1 00:25:06.710 --rc geninfo_all_blocks=1 00:25:06.710 --rc geninfo_unexecuted_blocks=1 00:25:06.710 00:25:06.710 ' 00:25:06.710 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:06.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.710 --rc genhtml_branch_coverage=1 00:25:06.710 --rc genhtml_function_coverage=1 00:25:06.710 --rc genhtml_legend=1 00:25:06.711 --rc geninfo_all_blocks=1 00:25:06.711 --rc geninfo_unexecuted_blocks=1 00:25:06.711 00:25:06.711 ' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:06.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.711 --rc genhtml_branch_coverage=1 00:25:06.711 --rc genhtml_function_coverage=1 00:25:06.711 --rc genhtml_legend=1 00:25:06.711 --rc geninfo_all_blocks=1 00:25:06.711 --rc geninfo_unexecuted_blocks=1 00:25:06.711 00:25:06.711 ' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.711 10:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.711 10:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.711 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.280 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.281 Found net devices under 0000:86:00.0: cvl_0_0 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.281 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.281 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.281 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:25:13.281 00:25:13.281 --- 10.0.0.2 ping statistics --- 00:25:13.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.281 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:13.281 00:25:13.281 --- 10.0.0.1 ping statistics --- 00:25:13.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.281 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3617424 00:25:13.281 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3617424 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3617424 ']' 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.281 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f2a1cf1c66b989d9acd03b6007cc7fe 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kof 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f2a1cf1c66b989d9acd03b6007cc7fe 0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f2a1cf1c66b989d9acd03b6007cc7fe 0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f2a1cf1c66b989d9acd03b6007cc7fe 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kof 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kof 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kof 00:25:13.282 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5243c08fd27eb680932fb2af510abfeb8f7185c7c799e0ba82f267e6b91c0e17 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O5Z 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5243c08fd27eb680932fb2af510abfeb8f7185c7c799e0ba82f267e6b91c0e17 3 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5243c08fd27eb680932fb2af510abfeb8f7185c7c799e0ba82f267e6b91c0e17 3 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5243c08fd27eb680932fb2af510abfeb8f7185c7c799e0ba82f267e6b91c0e17 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O5Z 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O5Z 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.O5Z 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf0bd7e962d4981907d35d922aabfc634db59986eda8d03f 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3eA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf0bd7e962d4981907d35d922aabfc634db59986eda8d03f 0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf0bd7e962d4981907d35d922aabfc634db59986eda8d03f 0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.282 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf0bd7e962d4981907d35d922aabfc634db59986eda8d03f 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3eA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3eA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3eA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e65130aaaeb03e6a5423de349a0c0a7c46651f4e77c216e 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gbA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e65130aaaeb03e6a5423de349a0c0a7c46651f4e77c216e 2 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8e65130aaaeb03e6a5423de349a0c0a7c46651f4e77c216e 2 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e65130aaaeb03e6a5423de349a0c0a7c46651f4e77c216e 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gbA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gbA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gbA 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.282 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc3cf1cef64b7063ce2cc4036e8caa3a 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HrZ 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc3cf1cef64b7063ce2cc4036e8caa3a 1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc3cf1cef64b7063ce2cc4036e8caa3a 1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc3cf1cef64b7063ce2cc4036e8caa3a 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HrZ 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HrZ 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HrZ 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=75a237437d591863c3583e7c54253a8d 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mg7 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 75a237437d591863c3583e7c54253a8d 1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 75a237437d591863c3583e7c54253a8d 1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=75a237437d591863c3583e7c54253a8d 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mg7 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mg7 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mg7 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.283 10:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.283 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43627311da8d70eaa6d0083f9ea4042e4db58a0e9449725a 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WYA 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43627311da8d70eaa6d0083f9ea4042e4db58a0e9449725a 2 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43627311da8d70eaa6d0083f9ea4042e4db58a0e9449725a 2 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.283 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43627311da8d70eaa6d0083f9ea4042e4db58a0e9449725a 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WYA 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WYA 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WYA 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=489bf63de4386caa8f072481eac3e40c 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.G1u 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 489bf63de4386caa8f072481eac3e40c 0 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 489bf63de4386caa8f072481eac3e40c 0 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=489bf63de4386caa8f072481eac3e40c 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.G1u 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.G1u 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.G1u 00:25:13.542 10:42:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8f9e7047ae8a337a125548aee2dffe33c34e947e41d1ac8aed41386ebeb81dff 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NOu 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8f9e7047ae8a337a125548aee2dffe33c34e947e41d1ac8aed41386ebeb81dff 3 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8f9e7047ae8a337a125548aee2dffe33c34e947e41d1ac8aed41386ebeb81dff 3 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8f9e7047ae8a337a125548aee2dffe33c34e947e41d1ac8aed41386ebeb81dff 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:13.542 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NOu 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NOu 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NOu 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3617424 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3617424 ']' 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
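The `gen_dhchap_key` trace above reads random bytes via `xxd -p -c0 -l <len/2> /dev/urandom` and then wraps the hex string in a `DHHC-1:<digest>:<base64>:` envelope through an inline `python -` step whose body is not captured in the log. A minimal sketch of that formatting step, under the assumption that the four trailing bytes before the closing `:` are a little-endian CRC32 of the ASCII key (the visible `DHHC-1:00:YmYwYmQ3...==:` outputs later in the log are structurally consistent with this, but the exact checksum is an assumption):

```python
import base64
import binascii

def format_dhchap_key(key: str, digest: int) -> str:
    """Wrap a hex secret in the DHHC-1 envelope seen in the trace.

    Assumption: the opaque `python -` step appends a little-endian
    CRC32 of the ASCII key before base64-encoding; only the envelope
    shape (prefix, 2-digit digest id, base64 payload, trailing colon)
    is directly visible in the log.
    """
    data = key.encode("ascii")
    crc = binascii.crc32(data).to_bytes(4, "little")
    payload = base64.b64encode(data + crc).decode("ascii")
    return "DHHC-1:%02x:%s:" % (digest, payload)

# The null-digest key generated at host/auth.sh@74 in the trace:
print(format_dhchap_key("bf0bd7e962d4981907d35d922aabfc634db59986eda8d03f", 0))
```

Base64-decoding the payload of such a key recovers the original hex string plus the four checksum bytes, which is how `nvmf_auth` consumers can validate a key file after the `chmod 0600` step.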
00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.543 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kof 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.O5Z ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5Z 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3eA 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gbA ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbA 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HrZ 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mg7 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mg7 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.WYA 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.G1u ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.G1u 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NOu 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.803 10:42:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:13.803 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:17.091 Waiting for block devices as requested 00:25:17.092 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:17.092 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:17.092 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:17.350 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.350 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.350 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.350 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.608 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:17.608 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:17.608 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:17.608 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:18.175 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.175 No valid GPT data, bailing 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:18.434 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:18.434 00:25:18.434 Discovery Log Number of Records 2, Generation counter 2 00:25:18.434 =====Discovery Log Entry 0====== 00:25:18.434 trtype: tcp 00:25:18.434 adrfam: ipv4 00:25:18.434 subtype: current discovery subsystem 00:25:18.434 treq: not specified, sq flow control disable supported 00:25:18.434 portid: 1 00:25:18.434 trsvcid: 4420 00:25:18.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:18.434 traddr: 10.0.0.1 00:25:18.434 eflags: none 00:25:18.434 sectype: none 00:25:18.434 =====Discovery Log Entry 1====== 00:25:18.434 trtype: tcp 00:25:18.434 adrfam: ipv4 00:25:18.434 subtype: nvme subsystem 00:25:18.434 treq: not specified, sq flow control disable supported 00:25:18.434 portid: 1 00:25:18.434 trsvcid: 4420 00:25:18.434 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:18.434 traddr: 10.0.0.1 00:25:18.434 eflags: none 00:25:18.434 sectype: none 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.434 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.435 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.435 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.435 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.692 nvme0n1 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.692 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.950 nvme0n1 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.950 10:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.950 
10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.950 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.951 nvme0n1 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.951 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:19.208 nvme0n1 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.208 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:19.466 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:19.466 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 nvme0n1 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.467 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.467 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.725 nvme0n1 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.725 
10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:19.725 
10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.725 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.726 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.726 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.984 nvme0n1 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.984 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:19.984 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.985 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.985 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.243 nvme0n1 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.243 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.243 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.244 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.502 nvme0n1 00:25:20.502 10:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:20.502 10:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.502 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.760 nvme0n1 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:20.760 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.761 10:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.761 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.019 nvme0n1 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.019 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.278 nvme0n1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.278 
10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.278 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.537 nvme0n1 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.537 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.796 10:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.796 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.055 nvme0n1 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.055 10:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:22.055 
10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.055 10:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.055 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.314 nvme0n1 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.314 10:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.314 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.315 
10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.315 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.573 nvme0n1 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.573 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.574 10:42:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.574 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 nvme0n1 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.142 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.143 10:42:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.143 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.401 nvme0n1 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.401 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 10:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.919 nvme0n1 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.919 10:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.919 10:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.919 10:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.919 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.486 nvme0n1 00:25:24.486 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.486 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.486 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.486 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.486 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.486 10:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.486 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.487 10:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.487 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.745 nvme0n1 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.745 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.005 10:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.005 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.573 nvme0n1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.573 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.573 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.573 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.573 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.140 nvme0n1 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.140 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.140 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.706 nvme0n1 00:25:26.706 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.964 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.965 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.531 nvme0n1 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.531 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.532 
10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.532 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.098 nvme0n1 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.098 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.357 nvme0n1 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.357 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.357 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.358 
10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.358 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.625 nvme0n1 
00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:28.625 10:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.625 
10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.625 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.883 nvme0n1 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.883 10:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.883 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.884 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.884 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.884 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.884 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.884 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.143 nvme0n1 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.143 10:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.143 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 nvme0n1 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.402 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.674 nvme0n1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.674 
10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.674 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 nvme0n1 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 
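A note on the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` lines that recur in the trace above (host/auth.sh@58): this is bash's `:+` alternate-value expansion inside an array assignment. The array receives the `--dhchap-ctrlr-key` flag pair only when a controller key exists for that keyid; when the ckey is empty (as for keyid 4 later in this run, where `ckey=''`), the array stays empty and the attach becomes unidirectional. A minimal standalone sketch of the idiom, with illustrative placeholder values rather than real DHHC-1 keys:

```shell
#!/usr/bin/env bash
# Illustrative controller-key table; the empty slot mirrors keyid 4
# in the log above, which has no ckey.
ckeys=("secret0" "secret1" "")

for keyid in "${!ckeys[@]}"; do
    # :+ expansion: produce the two-word flag pair only when
    # ckeys[keyid] is set and non-empty; otherwise expand to nothing,
    # leaving ckey a zero-element array.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid argc=${#ckey[@]} args=${ckey[*]}"
done
```

Because the outer expansion is unquoted, the result word-splits into two array elements (`--dhchap-ctrlr-key` and `ckeyN`) when a key is present, which is exactly the shape `rpc_cmd bdev_nvme_attach_controller` expects when the array is later expanded as `"${ckey[@]}"`.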
00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.967 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.967 10:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.968 nvme0n1 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.968 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.968 10:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.255 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.256 nvme0n1 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.256 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.540 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.541 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.541 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 nvme0n1 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.541 10:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.541 10:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.541 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.541 10:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.799 nvme0n1 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.799 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.057 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 
10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.058 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.316 nvme0n1 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.316 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.576 nvme0n1 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.576 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.836 nvme0n1 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.836 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.095 nvme0n1 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.095 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:32.354 10:42:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.354 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.613 nvme0n1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.613 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.181 nvme0n1 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.181 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.439 nvme0n1 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.439 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.698 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 nvme0n1 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.957 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.958 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:34.525 nvme0n1 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.525 10:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.525 10:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.525 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.526 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.094 nvme0n1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:35.094 10:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.094 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.662 nvme0n1 00:25:35.662 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.662 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.662 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.662 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.662 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.662 
10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.921 10:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.921 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.488 nvme0n1 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.489 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.055 nvme0n1 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:37.055 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.055 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.313 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.880 nvme0n1 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.880 
10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.880 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 nvme0n1 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.881 10:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.140 nvme0n1 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.140 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:38.400 10:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.400 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 nvme0n1 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.401 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.659 nvme0n1 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.659 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.659 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.917 nvme0n1 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.917 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.918 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.918 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.176 nvme0n1 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.176 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.177 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.177 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.437 nvme0n1 00:25:39.437 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.437 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.437 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.437 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.437 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:39.437 
10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.437 10:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.437 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.695 nvme0n1 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.695 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.696 10:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.696 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 nvme0n1 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.954 10:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.954 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.213 nvme0n1 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.213 
10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.213 10:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.213 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.214 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.472 nvme0n1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.472 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:40.472 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.472 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.472 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 nvme0n1 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.741 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:40.741 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.999 nvme0n1 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.258 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.258 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.516 nvme0n1 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.516 
10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.516 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.775 nvme0n1 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.775 10:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.775 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.342 nvme0n1 00:25:42.342 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.342 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.342 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:42.343 10:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.601 nvme0n1 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.601 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.859 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.860 
10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.860 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.118 nvme0n1 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.118 10:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.118 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
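The loop traced above repeatedly attaches `nvme0` with `--dhchap-key keyN`, appending `--dhchap-ctrlr-key ckeyN` only when a controller key exists for that keyid (note the `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}` expansion, and that the keyid=4 attach later in this log carries no ctrlr key). A minimal sketch of that argument-assembly logic, with `build_attach_args` being a hypothetical helper name rather than anything in `host/auth.sh`:

```shell
#!/bin/sh
# Sketch: build the bdev_nvme_attach_controller argument list the way the
# traced test does. The ctrlr key is optional, mirroring the
# ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the log.
build_attach_args() {
  keyid="$1"   # key index (0..4 in this log)
  ckey="$2"    # controller key material; may be empty (as for keyid=4)
  args="-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420"
  args="$args -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0"
  args="$args --dhchap-key key${keyid}"
  # Only append the controller key flag when a ckey is configured.
  if [ -n "$ckey" ]; then
    args="$args --dhchap-ctrlr-key ckey${keyid}"
  fi
  echo "$args"
}
```

These args would then be handed to `rpc_cmd bdev_nvme_attach_controller`, as in each attach step of the trace; with an empty second argument the `--dhchap-ctrlr-key` flag is simply omitted.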
00:25:43.685 nvme0n1 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.685 
10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.685 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.686 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.944 nvme0n1 00:25:43.944 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.944 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.944 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.944 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.944 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWYyYTFjZjFjNjZiOTg5ZDlhY2QwM2I2MDA3Y2M3ZmX+pzS3: 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI0M2MwOGZkMjdlYjY4MDkzMmZiMmFmNTEwYWJmZWI4ZjcxODVjN2M3OTllMGJhODJmMjY3ZTZiOTFjMGUxN4lZKTE=: 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.203 10:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.770 nvme0n1 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.770 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.770 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.771 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.771 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.339 nvme0n1 00:25:45.339 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.339 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.339 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.339 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.340 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.340 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.340 10:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.340 10:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.340 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.277 nvme0n1 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.277 10:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDM2MjczMTFkYThkNzBlYWE2ZDAwODNmOWVhNDA0MmU0ZGI1OGEwZTk0NDk3MjVhbfo3Rg==: 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg5YmY2M2RlNDM4NmNhYThmMDcyNDgxZWFjM2U0MGNjrql1: 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.277 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.278 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
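The attach calls traced above all follow the same shape: `rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp ... --dhchap-key keyN --dhchap-ctrlr-key ckeyN`, which the SPDK rpc client turns into a single JSON-RPC request. As a minimal sketch (not SPDK's own `rpc.py`), the payload can be assembled like this — the field names are taken from the request dumps printed later in this log when a call fails, and the helper name is hypothetical:

```python
import json

def attach_controller_request(name, traddr, trsvcid, hostnqn, subnqn,
                              dhchap_key=None, dhchap_ctrlr_key=None, req_id=1):
    """Sketch of the JSON-RPC payload behind `rpc_cmd bdev_nvme_attach_controller`.

    Field names mirror the "request:" dumps in this log; this is an
    illustration, not the actual SPDK client implementation.
    """
    params = {
        "name": name,
        "trtype": "tcp",
        "adrfam": "ipv4",
        "traddr": traddr,
        "trsvcid": trsvcid,
        "subnqn": subnqn,
        "hostnqn": hostnqn,
    }
    # DH-HMAC-CHAP key names are optional; the test loop varies them per keyid.
    if dhchap_key is not None:
        params["dhchap_key"] = dhchap_key
    if dhchap_ctrlr_key is not None:
        params["dhchap_ctrlr_key"] = dhchap_ctrlr_key
    return json.dumps({"jsonrpc": "2.0",
                       "method": "bdev_nvme_attach_controller",
                       "params": params,
                       "id": req_id})

# The key3/ckey3 attach from the trace above:
req = attach_controller_request("nvme0", "10.0.0.1", "4420",
                                "nqn.2024-02.io.spdk:host0",
                                "nqn.2024-02.io.spdk:cnode0",
                                dhchap_key="key3", dhchap_ctrlr_key="ckey3")
```

Each loop iteration first reconfigures the target side (`nvmet_auth_set_key`) and the host side (`bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...`) before issuing an attach like the one sketched here.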
00:25:46.844 nvme0n1 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.844 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGY5ZTcwNDdhZThhMzM3YTEyNTU0OGFlZTJkZmZlMzNjMzRlOTQ3ZTQxZDFhYzhhZWQ0MTM4NmViZWI4MWRmZriFFQE=: 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.845 
10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.845 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.412 nvme0n1 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.412 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:47.412 
10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.412 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.413 request: 00:25:47.413 { 00:25:47.413 "name": "nvme0", 00:25:47.413 "trtype": "tcp", 00:25:47.413 "traddr": "10.0.0.1", 00:25:47.413 "adrfam": "ipv4", 00:25:47.413 "trsvcid": "4420", 00:25:47.413 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.413 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.413 "prchk_reftag": false, 00:25:47.413 "prchk_guard": false, 00:25:47.413 "hdgst": false, 00:25:47.413 "ddgst": false, 00:25:47.413 "allow_unrecognized_csi": false, 00:25:47.413 "method": "bdev_nvme_attach_controller", 00:25:47.413 "req_id": 1 00:25:47.413 } 00:25:47.413 Got JSON-RPC error response 00:25:47.413 response: 00:25:47.413 { 00:25:47.413 "code": -5, 00:25:47.413 "message": "Input/output 
error" 00:25:47.413 } 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.413 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.672 request: 00:25:47.672 { 00:25:47.672 "name": "nvme0", 00:25:47.672 "trtype": "tcp", 00:25:47.672 "traddr": "10.0.0.1", 
00:25:47.672 "adrfam": "ipv4", 00:25:47.672 "trsvcid": "4420", 00:25:47.672 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.672 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.672 "prchk_reftag": false, 00:25:47.672 "prchk_guard": false, 00:25:47.672 "hdgst": false, 00:25:47.672 "ddgst": false, 00:25:47.672 "dhchap_key": "key2", 00:25:47.672 "allow_unrecognized_csi": false, 00:25:47.672 "method": "bdev_nvme_attach_controller", 00:25:47.672 "req_id": 1 00:25:47.672 } 00:25:47.672 Got JSON-RPC error response 00:25:47.672 response: 00:25:47.672 { 00:25:47.672 "code": -5, 00:25:47.672 "message": "Input/output error" 00:25:47.672 } 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:47.672 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.672 10:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.673 10:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.673 request: 00:25:47.673 { 00:25:47.673 "name": "nvme0", 00:25:47.673 "trtype": "tcp", 00:25:47.673 "traddr": "10.0.0.1", 00:25:47.673 "adrfam": "ipv4", 00:25:47.673 "trsvcid": "4420", 00:25:47.673 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.673 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.673 "prchk_reftag": false, 00:25:47.673 "prchk_guard": false, 00:25:47.673 "hdgst": false, 00:25:47.673 "ddgst": false, 00:25:47.673 "dhchap_key": "key1", 00:25:47.673 "dhchap_ctrlr_key": "ckey2", 00:25:47.673 "allow_unrecognized_csi": false, 00:25:47.673 "method": "bdev_nvme_attach_controller", 00:25:47.673 "req_id": 1 00:25:47.673 } 00:25:47.673 Got JSON-RPC error response 00:25:47.673 response: 00:25:47.673 { 00:25:47.673 "code": -5, 00:25:47.673 "message": "Input/output error" 00:25:47.673 } 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.673 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.931 nvme0n1 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.931 10:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.931 10:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.931 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.932 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.932 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.190 request: 00:25:48.190 { 00:25:48.190 "name": "nvme0", 00:25:48.190 "dhchap_key": "key1", 00:25:48.190 "dhchap_ctrlr_key": "ckey2", 00:25:48.190 "method": "bdev_nvme_set_keys", 00:25:48.190 "req_id": 1 00:25:48.190 } 00:25:48.190 Got JSON-RPC error response 00:25:48.190 response: 00:25:48.190 { 00:25:48.190 "code": -13, 00:25:48.190 "message": "Permission denied" 00:25:48.190 } 00:25:48.190 
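The negative-path checks in this section assert on the JSON-RPC error codes in the dumped responses: `-5` ("Input/output error") when a connect-time DH-HMAC-CHAP negotiation fails with missing or mismatched keys, and `-13` ("Permission denied") when `bdev_nvme_set_keys` is rejected for a live controller. These codes are negated POSIX errno values (EIO = 5, EACCES = 13), so the message strings can be recovered portably — a small sketch, assuming a POSIX `strerror` table:

```python
import os

def spdk_rpc_error_message(code):
    """Map a negative JSON-RPC error code (negated errno) back to its
    POSIX error string, matching the "message" field in the dumps above."""
    if code >= 0:
        raise ValueError("expected a negative JSON-RPC error code")
    return os.strerror(-code)
```

With this mapping, the `[[ 1 == 0 ]]` / `es=1` sequences in the trace are simply the test harness confirming that the expected failure (and only that failure) occurred.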
10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:48.190 10:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:49.125 10:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:50.059 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.059 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:50.059 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.059 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.059 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYwYmQ3ZTk2MmQ0OTgxOTA3ZDM1ZDkyMmFhYmZjNjM0ZGI1OTk4NmVkYThkMDNmJPCMxA==: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: ]] 00:25:50.318 10:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NTEzMGFhYWViMDNlNmE1NDIzZGUzNDlhMGMwYTdjNDY2NTFmNGU3N2MyMTZlYp/cnQ==: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.318 nvme0n1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.318 10:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMzY2YxY2VmNjRiNzA2M2NlMmNjNDAzNmU4Y2FhM2EtpKIn: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: ]] 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzVhMjM3NDM3ZDU5MTg2M2MzNTgzZTdjNTQyNTNhOGS2Gvme: 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:50.318 
10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.318 10:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.318 request: 00:25:50.318 { 00:25:50.318 "name": "nvme0", 00:25:50.318 "dhchap_key": "key2", 00:25:50.318 "dhchap_ctrlr_key": "ckey1", 00:25:50.318 "method": "bdev_nvme_set_keys", 00:25:50.318 "req_id": 1 00:25:50.318 } 00:25:50.318 Got JSON-RPC error response 00:25:50.318 response: 00:25:50.318 { 00:25:50.318 "code": -13, 00:25:50.318 "message": "Permission denied" 00:25:50.318 } 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:50.318 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.318 10:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.576 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.576 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:50.576 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.510 rmmod nvme_tcp 00:25:51.510 rmmod nvme_fabrics 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3617424 ']' 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3617424 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3617424 ']' 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3617424 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.510 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3617424 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3617424' 00:25:51.770 killing process with pid 3617424 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3617424 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3617424 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.770 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:54.305 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:56.840 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:56.840 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:56.841 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:56.841 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:56.841 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:56.841 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:56.841 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.819 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:57.819 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kof /tmp/spdk.key-null.3eA /tmp/spdk.key-sha256.HrZ /tmp/spdk.key-sha384.WYA /tmp/spdk.key-sha512.NOu 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:57.819 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:01.111 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:01.111 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:01.111 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:01.111 00:26:01.111 real 0m54.112s 00:26:01.111 user 0m48.695s 00:26:01.111 sys 0m12.807s 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.111 ************************************ 00:26:01.111 END TEST nvmf_auth_host 00:26:01.111 ************************************ 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:26:01.111 10:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.111 ************************************ 00:26:01.111 START TEST nvmf_digest 00:26:01.111 ************************************ 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:01.111 * Looking for test storage... 00:26:01.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.111 --rc genhtml_branch_coverage=1 00:26:01.111 --rc genhtml_function_coverage=1 00:26:01.111 --rc genhtml_legend=1 00:26:01.111 --rc geninfo_all_blocks=1 00:26:01.111 --rc geninfo_unexecuted_blocks=1 00:26:01.111 00:26:01.111 ' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.111 --rc genhtml_branch_coverage=1 00:26:01.111 --rc genhtml_function_coverage=1 00:26:01.111 --rc genhtml_legend=1 00:26:01.111 --rc geninfo_all_blocks=1 00:26:01.111 --rc geninfo_unexecuted_blocks=1 00:26:01.111 00:26:01.111 ' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.111 --rc genhtml_branch_coverage=1 00:26:01.111 --rc genhtml_function_coverage=1 00:26:01.111 --rc genhtml_legend=1 00:26:01.111 --rc geninfo_all_blocks=1 00:26:01.111 --rc geninfo_unexecuted_blocks=1 00:26:01.111 00:26:01.111 ' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.111 --rc genhtml_branch_coverage=1 00:26:01.111 --rc genhtml_function_coverage=1 00:26:01.111 --rc genhtml_legend=1 00:26:01.111 --rc geninfo_all_blocks=1 00:26:01.111 --rc geninfo_unexecuted_blocks=1 00:26:01.111 00:26:01.111 ' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.111 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.112 10:43:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.112 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.680 10:43:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:07.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:07.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:07.680 Found net devices under 0000:86:00.0: cvl_0_0 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:07.680 Found net devices under 0000:86:00.1: cvl_0_1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.680 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:26:07.680 00:26:07.681 --- 10.0.0.2 ping statistics --- 00:26:07.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.681 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:26:07.681 00:26:07.681 --- 10.0.0.1 ping statistics --- 00:26:07.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.681 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 ************************************ 00:26:07.681 START TEST nvmf_digest_clean 00:26:07.681 ************************************ 00:26:07.681 
10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3631175 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3631175 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3631175 ']' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.681 10:43:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 [2024-11-20 10:43:07.564408] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:07.681 [2024-11-20 10:43:07.564459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.681 [2024-11-20 10:43:07.649549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.681 [2024-11-20 10:43:07.692741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.681 [2024-11-20 10:43:07.692777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.681 [2024-11-20 10:43:07.692785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.681 [2024-11-20 10:43:07.692794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.681 [2024-11-20 10:43:07.692799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:07.681 [2024-11-20 10:43:07.693375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 null0 00:26:07.681 [2024-11-20 10:43:07.853932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.681 [2024-11-20 10:43:07.878155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3631207 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3631207 /var/tmp/bperf.sock 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3631207 ']' 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.681 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.681 [2024-11-20 10:43:07.930707] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:07.681 [2024-11-20 10:43:07.930751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631207 ] 00:26:07.681 [2024-11-20 10:43:07.991729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.681 [2024-11-20 10:43:08.035160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.681 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.940 nvme0n1 00:26:07.940 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.940 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.240 Running I/O for 2 seconds... 00:26:10.152 23716.00 IOPS, 92.64 MiB/s [2024-11-20T09:43:10.883Z] 24709.00 IOPS, 96.52 MiB/s 00:26:10.152 Latency(us) 00:26:10.152 [2024-11-20T09:43:10.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.152 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:10.152 nvme0n1 : 2.04 24246.57 94.71 0.00 0.00 5171.67 2692.67 43766.65 00:26:10.152 [2024-11-20T09:43:10.883Z] =================================================================================================================== 00:26:10.152 [2024-11-20T09:43:10.883Z] Total : 24246.57 94.71 0.00 0.00 5171.67 2692.67 43766.65 00:26:10.152 { 00:26:10.152 "results": [ 00:26:10.152 { 00:26:10.152 "job": "nvme0n1", 00:26:10.152 "core_mask": "0x2", 00:26:10.152 "workload": "randread", 00:26:10.152 "status": "finished", 00:26:10.152 "queue_depth": 128, 00:26:10.152 "io_size": 4096, 00:26:10.152 "runtime": 2.043423, 00:26:10.152 "iops": 24246.57058279172, 00:26:10.152 "mibps": 94.71316633903015, 00:26:10.152 "io_failed": 0, 00:26:10.152 "io_timeout": 0, 00:26:10.152 "avg_latency_us": 5171.671240674016, 00:26:10.152 "min_latency_us": 2692.6747826086958, 00:26:10.152 "max_latency_us": 43766.65043478261 00:26:10.152 } 00:26:10.152 ], 00:26:10.152 "core_count": 1 00:26:10.152 } 00:26:10.152 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:10.152 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:10.152 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:10.152 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:10.152 | select(.opcode=="crc32c") 00:26:10.152 | "\(.module_name) \(.executed)"' 00:26:10.152 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:10.411 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:10.411 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:10.411 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3631207 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3631207 ']' 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3631207 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.412 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3631207 00:26:10.412 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.412 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.412 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3631207' 00:26:10.412 killing process with pid 3631207 00:26:10.412 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3631207 00:26:10.412 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.412 00:26:10.412 Latency(us) 00:26:10.412 [2024-11-20T09:43:11.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.412 [2024-11-20T09:43:11.143Z] =================================================================================================================== 00:26:10.412 [2024-11-20T09:43:11.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.412 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3631207 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3631682 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3631682 /var/tmp/bperf.sock 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3631682 ']' 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.671 [2024-11-20 10:43:11.214481] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:10.671 [2024-11-20 10:43:11.214532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631682 ] 00:26:10.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.671 Zero copy mechanism will not be used. 
00:26:10.671 [2024-11-20 10:43:11.290141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.671 [2024-11-20 10:43:11.328044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:10.671 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.930 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.930 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.498 nvme0n1 00:26:11.498 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:11.498 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:11.498 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.498 Zero copy mechanism will not be used. 00:26:11.498 Running I/O for 2 seconds... 
00:26:13.369 5746.00 IOPS, 718.25 MiB/s [2024-11-20T09:43:14.100Z] 5728.00 IOPS, 716.00 MiB/s 00:26:13.369 Latency(us) 00:26:13.369 [2024-11-20T09:43:14.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.369 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:13.369 nvme0n1 : 2.00 5724.69 715.59 0.00 0.00 2792.38 651.80 10713.71 00:26:13.369 [2024-11-20T09:43:14.100Z] =================================================================================================================== 00:26:13.369 [2024-11-20T09:43:14.100Z] Total : 5724.69 715.59 0.00 0.00 2792.38 651.80 10713.71 00:26:13.369 { 00:26:13.369 "results": [ 00:26:13.369 { 00:26:13.369 "job": "nvme0n1", 00:26:13.369 "core_mask": "0x2", 00:26:13.369 "workload": "randread", 00:26:13.369 "status": "finished", 00:26:13.369 "queue_depth": 16, 00:26:13.369 "io_size": 131072, 00:26:13.369 "runtime": 2.003952, 00:26:13.369 "iops": 5724.688016479437, 00:26:13.369 "mibps": 715.5860020599296, 00:26:13.369 "io_failed": 0, 00:26:13.369 "io_timeout": 0, 00:26:13.369 "avg_latency_us": 2792.377553817234, 00:26:13.369 "min_latency_us": 651.7982608695652, 00:26:13.369 "max_latency_us": 10713.711304347826 00:26:13.369 } 00:26:13.369 ], 00:26:13.369 "core_count": 1 00:26:13.369 } 00:26:13.369 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:13.369 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:13.369 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:13.369 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:13.369 | select(.opcode=="crc32c") 00:26:13.369 | "\(.module_name) \(.executed)"' 00:26:13.369 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:13.627 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:13.627 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:13.627 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3631682 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3631682 ']' 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3631682 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3631682 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3631682' 00:26:13.628 killing process with pid 3631682 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3631682 00:26:13.628 Received shutdown signal, test time was about 2.000000 seconds 
00:26:13.628 00:26:13.628 Latency(us) 00:26:13.628 [2024-11-20T09:43:14.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.628 [2024-11-20T09:43:14.359Z] =================================================================================================================== 00:26:13.628 [2024-11-20T09:43:14.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.628 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3631682 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3632320 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3632320 /var/tmp/bperf.sock 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3632320 ']' 00:26:13.887 10:43:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.887 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.887 [2024-11-20 10:43:14.547506] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:13.887 [2024-11-20 10:43:14.547556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632320 ] 00:26:14.146 [2024-11-20 10:43:14.622220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.146 [2024-11-20 10:43:14.664826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.146 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.146 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:14.146 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.146 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.146 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.405 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.405 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.664 nvme0n1 00:26:14.664 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.664 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.923 Running I/O for 2 seconds... 
00:26:16.795 26854.00 IOPS, 104.90 MiB/s [2024-11-20T09:43:17.526Z] 26899.00 IOPS, 105.07 MiB/s 00:26:16.795 Latency(us) 00:26:16.795 [2024-11-20T09:43:17.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.795 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:16.795 nvme0n1 : 2.01 26901.84 105.09 0.00 0.00 4749.60 1923.34 6468.12 00:26:16.795 [2024-11-20T09:43:17.526Z] =================================================================================================================== 00:26:16.795 [2024-11-20T09:43:17.526Z] Total : 26901.84 105.09 0.00 0.00 4749.60 1923.34 6468.12 00:26:16.795 { 00:26:16.795 "results": [ 00:26:16.795 { 00:26:16.795 "job": "nvme0n1", 00:26:16.795 "core_mask": "0x2", 00:26:16.795 "workload": "randwrite", 00:26:16.795 "status": "finished", 00:26:16.795 "queue_depth": 128, 00:26:16.795 "io_size": 4096, 00:26:16.795 "runtime": 2.006034, 00:26:16.795 "iops": 26901.837157296435, 00:26:16.795 "mibps": 105.0853013956892, 00:26:16.795 "io_failed": 0, 00:26:16.795 "io_timeout": 0, 00:26:16.795 "avg_latency_us": 4749.596752351319, 00:26:16.795 "min_latency_us": 1923.3391304347826, 00:26:16.795 "max_latency_us": 6468.118260869565 00:26:16.795 } 00:26:16.795 ], 00:26:16.795 "core_count": 1 00:26:16.795 } 00:26:16.795 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:16.795 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:16.796 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.796 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.796 | select(.opcode=="crc32c") 00:26:16.796 | "\(.module_name) \(.executed)"' 00:26:16.796 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3632320 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3632320 ']' 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3632320 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3632320 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3632320' 00:26:17.054 killing process with pid 3632320 00:26:17.054 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3632320 00:26:17.054 Received shutdown signal, test time was about 2.000000 seconds 
00:26:17.054 00:26:17.054 Latency(us) 00:26:17.054 [2024-11-20T09:43:17.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.054 [2024-11-20T09:43:17.785Z] =================================================================================================================== 00:26:17.054 [2024-11-20T09:43:17.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.055 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3632320 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3632843 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3632843 /var/tmp/bperf.sock 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3632843 ']' 00:26:17.313 10:43:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.313 10:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.313 [2024-11-20 10:43:17.952015] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:17.313 [2024-11-20 10:43:17.952062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632843 ] 00:26:17.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.313 Zero copy mechanism will not be used. 
00:26:17.313 [2024-11-20 10:43:18.027630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.572 [2024-11-20 10:43:18.066139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.572 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.572 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:17.572 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:17.572 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:17.572 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.830 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.830 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.089 nvme0n1 00:26:18.089 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:18.089 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.089 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.089 Zero copy mechanism will not be used. 00:26:18.089 Running I/O for 2 seconds... 
00:26:20.398 6732.00 IOPS, 841.50 MiB/s [2024-11-20T09:43:21.129Z] 6487.50 IOPS, 810.94 MiB/s 00:26:20.398 Latency(us) 00:26:20.398 [2024-11-20T09:43:21.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.398 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:20.399 nvme0n1 : 2.00 6484.99 810.62 0.00 0.00 2462.99 1196.74 4502.04 00:26:20.399 [2024-11-20T09:43:21.130Z] =================================================================================================================== 00:26:20.399 [2024-11-20T09:43:21.130Z] Total : 6484.99 810.62 0.00 0.00 2462.99 1196.74 4502.04 00:26:20.399 { 00:26:20.399 "results": [ 00:26:20.399 { 00:26:20.399 "job": "nvme0n1", 00:26:20.399 "core_mask": "0x2", 00:26:20.399 "workload": "randwrite", 00:26:20.399 "status": "finished", 00:26:20.399 "queue_depth": 16, 00:26:20.399 "io_size": 131072, 00:26:20.399 "runtime": 2.003549, 00:26:20.399 "iops": 6484.992381019881, 00:26:20.399 "mibps": 810.6240476274851, 00:26:20.399 "io_failed": 0, 00:26:20.399 "io_timeout": 0, 00:26:20.399 "avg_latency_us": 2462.9871544209423, 00:26:20.399 "min_latency_us": 1196.744347826087, 00:26:20.399 "max_latency_us": 4502.038260869565 00:26:20.399 } 00:26:20.399 ], 00:26:20.399 "core_count": 1 00:26:20.399 } 00:26:20.399 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:20.399 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:20.399 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.399 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.399 | select(.opcode=="crc32c") 00:26:20.399 | "\(.module_name) \(.executed)"' 00:26:20.399 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3632843 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3632843 ']' 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3632843 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3632843 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3632843' 00:26:20.399 killing process with pid 3632843 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3632843 00:26:20.399 Received shutdown signal, test time was about 2.000000 seconds 
00:26:20.399 00:26:20.399 Latency(us) 00:26:20.399 [2024-11-20T09:43:21.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.399 [2024-11-20T09:43:21.130Z] =================================================================================================================== 00:26:20.399 [2024-11-20T09:43:21.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.399 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3632843 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3631175 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3631175 ']' 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3631175 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3631175 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3631175' 00:26:20.658 killing process with pid 3631175 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3631175 00:26:20.658 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3631175 00:26:20.916 00:26:20.916 
real 0m13.930s 00:26:20.916 user 0m26.644s 00:26:20.916 sys 0m4.616s 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.916 ************************************ 00:26:20.916 END TEST nvmf_digest_clean 00:26:20.916 ************************************ 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:20.916 ************************************ 00:26:20.916 START TEST nvmf_digest_error 00:26:20.916 ************************************ 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3633456 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3633456 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3633456 ']' 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.916 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.916 [2024-11-20 10:43:21.566112] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:20.916 [2024-11-20 10:43:21.566157] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.175 [2024-11-20 10:43:21.646061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.175 [2024-11-20 10:43:21.687331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.175 [2024-11-20 10:43:21.687368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:21.175 [2024-11-20 10:43:21.687376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.175 [2024-11-20 10:43:21.687382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.175 [2024-11-20 10:43:21.687387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.175 [2024-11-20 10:43:21.687963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.175 [2024-11-20 10:43:21.760419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.175 10:43:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.175 null0 00:26:21.175 [2024-11-20 10:43:21.855925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.175 [2024-11-20 10:43:21.880125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:21.175 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3633577 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3633577 /var/tmp/bperf.sock 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3633577 ']' 
00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.176 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.434 [2024-11-20 10:43:21.934244] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:21.434 [2024-11-20 10:43:21.934287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633577 ] 00:26:21.434 [2024-11-20 10:43:22.009751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.434 [2024-11-20 10:43:22.050364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:21.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.692 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.260 nvme0n1 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:22.260 10:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.260 Running I/O for 2 seconds... 00:26:22.260 [2024-11-20 10:43:22.852528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.852562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.852573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.864461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.864487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.864497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.875785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.875808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.875817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.883875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.883897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14131 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.883905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.894943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.894972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.894980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.903670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.903691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.903699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.915279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.915301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.915310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.925413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.925434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.925443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.936236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.936258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.936266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.948596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.948618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.948626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.957496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.957518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.957526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.969344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.969366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.260 [2024-11-20 10:43:22.969375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.260 [2024-11-20 10:43:22.982310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.260 [2024-11-20 10:43:22.982333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.261 [2024-11-20 10:43:22.982344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:22.995258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:22.995280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:22.995288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.003147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.003169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.003177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.014711] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.014733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.014741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.027393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.027415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.027424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.038647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.038668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.038677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.047679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.047709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.057181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.057202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.057211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.068504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.068526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.068535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.077019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.077040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.077048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.086354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.086376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.086384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.098291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.098312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.098321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.106760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.106781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.106790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.117308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.520 [2024-11-20 10:43:23.117329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.520 [2024-11-20 10:43:23.117338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.520 [2024-11-20 10:43:23.128225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.128246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 
10:43:23.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.137049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.137069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.137078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.146809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.146830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.146839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.156155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.156177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.156189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.167525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.167546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21090 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.167554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.177838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.177858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.177867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.188081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.188101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.188109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.198038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.198060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.198068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.206180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.206201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.216649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.216669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.216677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.229318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.229339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.229347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.521 [2024-11-20 10:43:23.241408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.521 [2024-11-20 10:43:23.241429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.521 [2024-11-20 10:43:23.241437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.254340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 
00:26:22.780 [2024-11-20 10:43:23.254368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.254377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.262813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.262834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.262842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.274549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.274569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.274577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.283304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.283324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.283332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.294942] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.294968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.294976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.306588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.306609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.306617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.314980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.315000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.315008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.326505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.326525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.326533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.338538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.338559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.780 [2024-11-20 10:43:23.338567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.780 [2024-11-20 10:43:23.350778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.780 [2024-11-20 10:43:23.350798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.350806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.361228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.361248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.361256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.370855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.370875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.370884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.380855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.380876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.380884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.390632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.390652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.390660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.399181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.399202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.399210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.409927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.409952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 
10:43:23.409961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.418982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.419002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.419010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.429152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.429173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.429185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.441682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.441704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.441713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.454508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.454529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19710 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.454538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.462840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.462860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.462868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.473298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.473317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.473324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.483715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.483736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.483744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.493440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.493461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.493469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.781 [2024-11-20 10:43:23.501781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:22.781 [2024-11-20 10:43:23.501801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.781 [2024-11-20 10:43:23.501810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.513704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.513726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.513734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.522800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.522821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.522829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.532314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 
00:26:23.040 [2024-11-20 10:43:23.532334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.532342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.541772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.541792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.541800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.551270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.551291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.561362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.561383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.561391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.570718] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.570739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.570747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.580172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.580192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.580200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.589485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.589506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.589514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.598207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.598226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.598237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.607500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.607520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.607528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.616863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.616883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.616891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.626348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.626369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.626377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.635587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.635607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.635616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.644986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.645005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.654193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.654222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.664130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.664151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.664160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.674160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.674180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.674188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.685268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.685293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.685302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.694409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.694430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.694438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.704181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.704202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.704211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.713795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.713816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16935 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.713824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.722974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.722994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.723002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.732591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.732610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.732618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.741655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.741676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.741685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.753269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.753290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.753298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.040 [2024-11-20 10:43:23.762549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.040 [2024-11-20 10:43:23.762570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.040 [2024-11-20 10:43:23.762579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.774588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.774614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.774622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.787408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.787429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.787437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.799730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 
10:43:23.799751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.799759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.812622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.812643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.812652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.821213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.821234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.821242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.831287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.831307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.831315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 24707.00 IOPS, 96.51 MiB/s [2024-11-20T09:43:24.029Z] [2024-11-20 10:43:23.842966] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.842986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.842994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.853194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.853215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.853223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.863264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.863285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.863297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.871689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.871710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.871719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.881666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.881687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.881695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.892494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.892515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.892523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.298 [2024-11-20 10:43:23.905434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.298 [2024-11-20 10:43:23.905456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.298 [2024-11-20 10:43:23.905465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.916380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.916401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.916409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.925134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.925155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.925163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.935062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.935087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.935097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.944647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.944668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.944676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.954692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.954713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 
10:43:23.954721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.964255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.964276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.964284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.973331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.973351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.973360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.982724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.982745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.982753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:23.992215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:23.992236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:908 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:23.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:24.001524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:24.001545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:24.001553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:24.011237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:24.011258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:24.011266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-20 10:43:24.020589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.299 [2024-11-20 10:43:24.020610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-20 10:43:24.020619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 10:43:24.031644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.557 [2024-11-20 10:43:24.031665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.557 [2024-11-20 10:43:24.031677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 10:43:24.043971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.043992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.044000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.052699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.052721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.052730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.064990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.065011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.065020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.077381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.077402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.077410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.088809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.088838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.097516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.097537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.097545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.109955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.109976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.109985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.122443] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.122464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.122472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.134985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.135013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.135021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.146927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.146953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.146962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.154998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.155020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.155029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 10:43:24.166434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1486370) 00:26:23.558 [2024-11-20 10:43:24.166457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.558 [2024-11-20 10:43:24.166465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated "data digest error on tqpair=(0x1486370)" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries from 10:43:24.175027 through 10:43:24.843479 omitted; each differs only in timestamp, cid, and lba ...]
00:26:24.335 24489.00 IOPS, 95.66 MiB/s 00:26:24.335 Latency(us) 00:26:24.335 [2024-11-20T09:43:25.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.335 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:24.335 nvme0n1 : 2.00 24496.21 95.69 0.00 0.00 5219.93 2507.46 20629.59 00:26:24.335 
[2024-11-20T09:43:25.066Z] =================================================================================================================== 00:26:24.335 [2024-11-20T09:43:25.066Z] Total : 24496.21 95.69 0.00 0.00 5219.93 2507.46 20629.59 00:26:24.335 { 00:26:24.335 "results": [ 00:26:24.335 { 00:26:24.335 "job": "nvme0n1", 00:26:24.335 "core_mask": "0x2", 00:26:24.335 "workload": "randread", 00:26:24.335 "status": "finished", 00:26:24.335 "queue_depth": 128, 00:26:24.335 "io_size": 4096, 00:26:24.335 "runtime": 2.003861, 00:26:24.335 "iops": 24496.210066466687, 00:26:24.335 "mibps": 95.6883205721355, 00:26:24.335 "io_failed": 0, 00:26:24.335 "io_timeout": 0, 00:26:24.335 "avg_latency_us": 5219.928779088769, 00:26:24.335 "min_latency_us": 2507.464347826087, 00:26:24.335 "max_latency_us": 20629.59304347826 00:26:24.335 } 00:26:24.335 ], 00:26:24.335 "core_count": 1 00:26:24.335 } 00:26:24.335 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:24.335 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:24.335 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:24.335 | .driver_specific 00:26:24.335 | .nvme_error 00:26:24.335 | .status_code 00:26:24.335 | .command_transient_transport_error' 00:26:24.335 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3633577 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3633577 ']' 00:26:24.594 10:43:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3633577 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3633577 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3633577' 00:26:24.594 killing process with pid 3633577 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3633577 00:26:24.594 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.594 00:26:24.594 Latency(us) 00:26:24.594 [2024-11-20T09:43:25.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.594 [2024-11-20T09:43:25.325Z] =================================================================================================================== 00:26:24.594 [2024-11-20T09:43:25.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3633577 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3634058 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3634058 /var/tmp/bperf.sock 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3634058 ']' 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.594 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.853 [2024-11-20 10:43:25.332898] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:26:24.853 [2024-11-20 10:43:25.332946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634058 ] 00:26:24.853 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.853 Zero copy mechanism will not be used. 00:26:24.853 [2024-11-20 10:43:25.406496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.853 [2024-11-20 10:43:25.450418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.853 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.853 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:24.853 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.853 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.111 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:25.112 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.112 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.112 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.112 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.112 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.375 nvme0n1 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.375 10:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.635 Zero copy mechanism will not be used. 00:26:25.635 Running I/O for 2 seconds... 
00:26:25.635 [2024-11-20 10:43:26.147856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.147892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.147904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.153515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.153540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.153549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.158900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.158923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.158932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.162455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.162476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.162485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.166657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.166679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.166687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.172042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.172065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.172073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.177212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.177235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.177244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.182709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.182732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.182741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.188482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.188506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.188518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.193962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.193984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.193992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.199546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.199568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.199577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.204927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.204953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.635 [2024-11-20 10:43:26.204961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.209914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.209937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.209946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.215588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.215610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.215619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.221231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.221254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.221263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.226826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.226849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.226857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.232410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.232432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.232440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.238034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.238060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.238068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.243611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.243634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.243642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.249203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.249225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.249233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.635 [2024-11-20 10:43:26.255120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.635 [2024-11-20 10:43:26.255142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.635 [2024-11-20 10:43:26.255151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.260898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.260920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.260928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.266369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.266390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.266399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.269365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:25.636 [2024-11-20 10:43:26.269386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.269395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.274825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.274847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.274855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.280308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.280329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.286225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.286247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.286256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.291889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.291911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.291919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.297275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.297297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.297305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.302778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.302800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.302808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.307819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.307842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.307850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.313301] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.313324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.313333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.318108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.318130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.318139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.322699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.322721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.322730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.327822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.327845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.327860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.332988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.333010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.333017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.338018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.338040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.338048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.343367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.343389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.343397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.348971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.348993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.349001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.354403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.354425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.354434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.636 [2024-11-20 10:43:26.359888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.636 [2024-11-20 10:43:26.359910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.636 [2024-11-20 10:43:26.359918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.365309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.365332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.365341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.370608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.370629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.370638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.376028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.376054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.376062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.381514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.381537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.381546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.387082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.387105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.387113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.392569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.392591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.897 [2024-11-20 10:43:26.392599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.397888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.397909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.397917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.403255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.403277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.403286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.408472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.408495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.408503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.413822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.413844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.413852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.419192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.419214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.419222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.424700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.424723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.424731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.430280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.430302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.430310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.436067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.436089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.441509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.441532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.441541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.446926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.446954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.446963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.452428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.452450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.452458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.457787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:25.897 [2024-11-20 10:43:26.457808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.457816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.463045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.463068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.463076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.468482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.468504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.468516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.473829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.473851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.473859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.479178] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.479200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.479219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.484680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.484702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.484710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.490177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.490199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.490207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.495698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.495719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.897 [2024-11-20 10:43:26.495727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:25.897 [2024-11-20 10:43:26.501126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.897 [2024-11-20 10:43:26.501148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.501157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.506414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.506436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.506444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.511810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.511832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.511840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.517205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.517228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.517237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.522537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.522558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.522567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.527836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.527858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.527867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.533357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.533380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.533388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.539988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.540010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.540018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.545741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.545763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.545771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.551190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.551220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.556712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.556734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.556743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.562230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.562252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.898 [2024-11-20 10:43:26.562264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.567737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.567760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.567768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.574033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.574056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.574065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.582221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.582244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.582253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.588247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.588269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.588277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.592440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.592463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.592471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.599318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.599341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.599349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.606687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.606710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.606719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.614409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.614431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.614440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.898 [2024-11-20 10:43:26.621526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:25.898 [2024-11-20 10:43:26.621553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.898 [2024-11-20 10:43:26.621562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.628525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.628548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.628557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.635557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.635579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.642196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.158 [2024-11-20 10:43:26.642218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.642226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.650039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.650062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.650071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.657683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.657707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.657717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.665489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.665511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.665520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.672796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.672818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.672827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.680423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.680454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.688244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.688268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.688277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.695984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.696006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.703813] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.703836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.703845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.158 [2024-11-20 10:43:26.712025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.158 [2024-11-20 10:43:26.712047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.158 [2024-11-20 10:43:26.712055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.718870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.718892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.718900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.727005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.727026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.727035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.734331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.734352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.734361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.741905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.741926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.741935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.749606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.749627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.749640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.757621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.757643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.757651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.764217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.764238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.764247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.770518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.770539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.770547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.776643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.776665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.776673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.783144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.783167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.783175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.788163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.788186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.788195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.794426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.794449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.794458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.800590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.800613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.800621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.806202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.806229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.159 [2024-11-20 10:43:26.806237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.811880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.811902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.811910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.817626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.817648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.817656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.823071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.823092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.823100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.828600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.828622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.834591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.834613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.834622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.840292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.840314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.840322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.846007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.846029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.846037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.851576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.851599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.851607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.857044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.857065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.857073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.862703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.159 [2024-11-20 10:43:26.862726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.159 [2024-11-20 10:43:26.862736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.159 [2024-11-20 10:43:26.868322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.160 [2024-11-20 10:43:26.868345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-11-20 10:43:26.868354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.160 [2024-11-20 10:43:26.873653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.160 [2024-11-20 10:43:26.873675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-11-20 10:43:26.873684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.160 [2024-11-20 10:43:26.879116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.160 [2024-11-20 10:43:26.879140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-11-20 10:43:26.879147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.160 [2024-11-20 10:43:26.884638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.160 [2024-11-20 10:43:26.884661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-11-20 10:43:26.884669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.890166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.890188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.419 [2024-11-20 10:43:26.890196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.895889] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.895911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.419 [2024-11-20 10:43:26.895919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.901692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.901714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.419 [2024-11-20 10:43:26.901726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.907485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.907508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.419 [2024-11-20 10:43:26.907516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.913053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.913090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.419 [2024-11-20 10:43:26.913099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:26.419 [2024-11-20 10:43:26.918429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.419 [2024-11-20 10:43:26.918450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.923796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.923818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.923826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.929119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.929140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.929149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.934363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.934385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.934393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.939664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.939685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.939693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.944944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.944972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.950302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.950327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.950334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.955598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.955620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.955628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.960871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.960891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.960899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.966253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.966275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.966283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.971568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.971590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.971599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.976969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.976991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.420 [2024-11-20 10:43:26.976999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.982433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.982455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.982463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.987813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.987835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.987843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.993185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.993217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.993225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:26.998559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:26.998583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:26.998593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.003994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.004016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.004024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.009333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.009363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.014693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.014715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.014723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.020065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.020088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.020096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.025411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.025442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.030875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.030897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.030905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.036269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.036292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.036300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.041608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.420 [2024-11-20 10:43:27.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.041642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.046973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.046996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.047003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.052250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.052273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.052281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.057690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.057713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.420 [2024-11-20 10:43:27.057721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.420 [2024-11-20 10:43:27.063047] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.420 [2024-11-20 10:43:27.063069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.063077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.068412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.068434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.068442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.073805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.073827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.073835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.079158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.079180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.079187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.084444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.084466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.084475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.089665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.089693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.089701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.094904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.094925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.094934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.100148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.100170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.100178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.105512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.105534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.105542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.110850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.110872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.110881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.116249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.116271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.116279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.121926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.121956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.121965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.128710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.128734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.128744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.136072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.136097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.136105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.421 [2024-11-20 10:43:27.143643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.421 [2024-11-20 10:43:27.143669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.421 [2024-11-20 10:43:27.143678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.681 5370.00 IOPS, 671.25 MiB/s [2024-11-20T09:43:27.412Z] [2024-11-20 10:43:27.152369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.681 [2024-11-20 10:43:27.152395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.681 [2024-11-20 10:43:27.152404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.159887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.159911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.159921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.167259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.167283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.167292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.174817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.174841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.174850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.182933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.182968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.182994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.191493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.191517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.191526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.199572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.199596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.199606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.208083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.208111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.208120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.216251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.682 [2024-11-20 10:43:27.216275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.216283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.224541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.224565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.224575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.232514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.232538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.232547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.240255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.240280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.240290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.248549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.248574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.248583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.256787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.256812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.256821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.265006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.265031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.265040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.271426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.271450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.271458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.277058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.277082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.277090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.282439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.282461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.282469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.288589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.288613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.288621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.296005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.296028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.303167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.303192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.303200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.311282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.682 [2024-11-20 10:43:27.311305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.682 [2024-11-20 10:43:27.311314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.682 [2024-11-20 10:43:27.319918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.319956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.319965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.328293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.328317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.328326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.336734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.336757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.336770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.345286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.345310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.345319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.353601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.353625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.353633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.361540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.361563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.683 [2024-11-20 10:43:27.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.370320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.370344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.370352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.377925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.377955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.377965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.386407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.386431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.386440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.394240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.394263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.394272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.683 [2024-11-20 10:43:27.402388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.683 [2024-11-20 10:43:27.402412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.683 [2024-11-20 10:43:27.402421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.943 [2024-11-20 10:43:27.410475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.943 [2024-11-20 10:43:27.410503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.943 [2024-11-20 10:43:27.410512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.943 [2024-11-20 10:43:27.417337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.943 [2024-11-20 10:43:27.417361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.943 [2024-11-20 10:43:27.417370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.943 [2024-11-20 10:43:27.423222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.943 [2024-11-20 10:43:27.423244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.943 [2024-11-20 10:43:27.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.428655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.428677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.428686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.434436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.434458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.434466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.440078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.440100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.440109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.445921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.944 [2024-11-20 10:43:27.445943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.445958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.452389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.452412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.452420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.459964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.459987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.459996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.466885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.466908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.466917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.474194] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.474217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.474225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.481043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.481066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.481075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.487307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.487328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.487336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.492667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.492690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.492698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.498125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.498147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.498155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.503469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.503491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.503499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.508791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.508812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.508820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.514190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.514212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.514224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.519543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.519565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.519573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.524857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.524879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.524887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.530218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.530239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.530248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.535487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.535509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.535517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.540763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.540784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.540792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.545979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.546001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.546008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.551244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.551266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.551274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.556579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.556600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.944 [2024-11-20 10:43:27.556608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.561876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.561901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.561910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.567097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.567119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.567129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.572020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.944 [2024-11-20 10:43:27.572042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.944 [2024-11-20 10:43:27.572051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.944 [2024-11-20 10:43:27.577276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.577298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.577306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.582330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.582351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.582359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.587464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.587487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.587495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.592573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.592594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.592602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.597778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.597800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.597808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.603158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.603180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.603187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.608491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.608513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.608520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.613753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.613775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.613783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.618966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:26.945 [2024-11-20 10:43:27.618988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.618996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.624278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.624300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.624308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.630532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.630559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.630569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.636016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.636039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.636048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.641392] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.641414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.641423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.646575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.646596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.646604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.649372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.649394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.649406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.654613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.654635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.654643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.660226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.660248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.665659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.665681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.665689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.945 [2024-11-20 10:43:27.670848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:26.945 [2024-11-20 10:43:27.670870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.945 [2024-11-20 10:43:27.670878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.676213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.676236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.676244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.681588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.681610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.681618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.686822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.686844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.686852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.692399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.692421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.697790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.697811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.697820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.703110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.703131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.703139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.708288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.708310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.708318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.713258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.713281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.718198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.718220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.205 [2024-11-20 10:43:27.718228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.723122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.723144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.723153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.728241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.728263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.728272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.733915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.733937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.733944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.739517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.739539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.205 [2024-11-20 10:43:27.739551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.205 [2024-11-20 10:43:27.744855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.205 [2024-11-20 10:43:27.744877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.744884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.750143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.750165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.750173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.755499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.755521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.755528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.760745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.760767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.760775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.766025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.766046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.766055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.771368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.771389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.771397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.776644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.776666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.776674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.781889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:27.206 [2024-11-20 10:43:27.781910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.781918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.787188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.787213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.787221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.792451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.792473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.792481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.797668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.797689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.797697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.802963] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.802985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.802993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.808209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.808231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.808239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.813462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.813484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.813492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.818705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.818727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.818735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.823957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.823978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.823986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.829232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.829254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.829262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.834532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.834553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.834561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.839809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.839831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.839839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.845075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.845097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.845105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.850358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.850378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.850387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.855585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.855606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.206 [2024-11-20 10:43:27.855614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.206 [2024-11-20 10:43:27.860881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.206 [2024-11-20 10:43:27.860903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.860911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.866152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.866174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.866182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.871416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.871438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.871446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.876707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.876729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.876743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.882072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.882094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.207 [2024-11-20 10:43:27.882102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.887372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.887393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.887402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.892620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.892642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.892650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.897906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.897928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.897936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.903165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.903187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.903195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.909161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.909183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.909191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.914813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.914835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.914843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.920172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.920193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.920201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.925422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.925447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.925455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.207 [2024-11-20 10:43:27.930734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.207 [2024-11-20 10:43:27.930756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.207 [2024-11-20 10:43:27.930764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.936069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.467 [2024-11-20 10:43:27.936090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.936098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.941409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.467 [2024-11-20 10:43:27.941430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.941438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.946653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 
00:26:27.467 [2024-11-20 10:43:27.946675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.946683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.951936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.467 [2024-11-20 10:43:27.951963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.957182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.467 [2024-11-20 10:43:27.957204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.957212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.962391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580) 00:26:27.467 [2024-11-20 10:43:27.962412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.467 [2024-11-20 10:43:27.962420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.467 [2024-11-20 10:43:27.967602] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.967623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.967631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.972909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.972931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.972939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.978142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.978163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.978171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.983426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.983446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.983454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.988744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.988765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.988773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.994022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.994044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.467 [2024-11-20 10:43:27.994052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.467 [2024-11-20 10:43:27.999226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.467 [2024-11-20 10:43:27.999249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:27.999257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.004488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.004509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.004518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.009727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.009748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.009756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.014989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.015011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.015023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.020398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.020419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.020427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.025976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.026006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.031609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.031631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.031639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.037311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.037333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.037340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.043079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.043102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.043110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.048654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.048676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.048684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.054264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.054286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.054294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.059898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.059919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.059927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.065393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.065434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.065443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.070926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.070955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.070964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.076632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.076654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.076662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.082221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.082243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.082251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.087919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.087941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.087955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.093653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.093675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.093683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.099419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.099442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.099450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.104978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.105008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.110607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.110630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.110638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.116257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.116280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.121858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.121880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.121888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.127479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.127502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.127510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.133064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.133086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.133094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.138599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.138621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.138630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.468 [2024-11-20 10:43:28.144436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbdd580)
00:26:27.468 [2024-11-20 10:43:28.144459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.468 [2024-11-20 10:43:28.144467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.468 5306.00 IOPS, 663.25 MiB/s
00:26:27.468 Latency(us)
00:26:27.468 [2024-11-20T09:43:28.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:27.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:27.469 nvme0n1 : 2.00 5307.33 663.42 0.00 0.00 3011.79 644.67 9061.06
00:26:27.469 [2024-11-20T09:43:28.200Z] ===================================================================================================================
00:26:27.469 [2024-11-20T09:43:28.200Z] Total : 5307.33 663.42 0.00 0.00 3011.79 644.67 9061.06
00:26:27.469 {
00:26:27.469   "results": [
00:26:27.469     {
00:26:27.469       "job": "nvme0n1",
00:26:27.469       "core_mask": "0x2",
00:26:27.469       "workload": "randread",
00:26:27.469       "status": "finished",
00:26:27.469       "queue_depth": 16,
00:26:27.469       "io_size": 131072,
00:26:27.469       "runtime": 2.002514,
00:26:27.469       "iops": 5307.328687839386,
00:26:27.469       "mibps": 663.4160859799232,
00:26:27.469       "io_failed": 0,
00:26:27.469       "io_timeout": 0,
00:26:27.469       "avg_latency_us": 3011.7907571468313,
00:26:27.469       "min_latency_us": 644.6747826086956,
00:26:27.469       "max_latency_us": 9061.064347826086
00:26:27.469     }
00:26:27.469   ],
00:26:27.469   "core_count": 1
00:26:27.469 }
00:26:27.469 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:27.469 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:27.469 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:27.469 | .driver_specific
00:26:27.469 | .nvme_error
00:26:27.469 | .status_code
00:26:27.469 | .command_transient_transport_error'
00:26:27.469 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 ))
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3634058
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3634058 ']'
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3634058
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3634058
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3634058'
00:26:27.728 killing process with pid 3634058
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3634058
00:26:27.728 Received shutdown signal, test time was about 2.000000 seconds
00:26:27.728
00:26:27.728 Latency(us)
00:26:27.728 [2024-11-20T09:43:28.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:27.728 [2024-11-20T09:43:28.459Z] ===================================================================================================================
00:26:27.728 [2024-11-20T09:43:28.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:27.728 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3634058
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3634597
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3634597 /var/tmp/bperf.sock
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3634597 ']'
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:27.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:27.987 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:27.987 [2024-11-20 10:43:28.630615] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization...
00:26:27.987 [2024-11-20 10:43:28.630667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634597 ]
00:26:27.987 [2024-11-20 10:43:28.706345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:28.246 [2024-11-20 10:43:28.745952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:28.246 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:28.246 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:28.246 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:28.246 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:28.505 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:28.764 nvme0n1
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:28.764 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:28.764 Running I/O for 2 seconds...
00:26:28.764 [2024-11-20 10:43:29.431070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ef6a8
00:26:28.764 [2024-11-20 10:43:29.432193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.764 [2024-11-20 10:43:29.432226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:28.764 [2024-11-20 10:43:29.440575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e1f80
00:26:28.764 [2024-11-20 10:43:29.441216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.764 [2024-11-20 10:43:29.441246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:28.764 [2024-11-20 10:43:29.450267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fa3a0
00:26:28.765 [2024-11-20 10:43:29.451028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.765 [2024-11-20 10:43:29.451049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:28.765 [2024-11-20 10:43:29.459007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2510
00:26:28.765 [2024-11-20 10:43:29.460326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.765 [2024-11-20 10:43:29.460346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:28.765 [2024-11-20 10:43:29.466915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e01f8
00:26:28.765 [2024-11-20 10:43:29.467664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.765 [2024-11-20 10:43:29.467683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:28.765 [2024-11-20 10:43:29.478402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e3d08
00:26:28.765 [2024-11-20 10:43:29.479732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.765 [2024-11-20 10:43:29.479751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:28.765 [2024-11-20 10:43:29.486851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e7c50
00:26:28.765 [2024-11-20 10:43:29.488208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.765 [2024-11-20 10:43:29.488228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.495018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ff3c8
00:26:29.024 [2024-11-20 10:43:29.495770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.495790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.504751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ed0b0
00:26:29.024 [2024-11-20 10:43:29.505603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.505623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.516018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f4b08
00:26:29.024 [2024-11-20 10:43:29.517263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.517283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.522872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2948
00:26:29.024 [2024-11-20 10:43:29.523610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.523632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.532486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166df118
00:26:29.024 [2024-11-20 10:43:29.533334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.533353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.543899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4140
00:26:29.024 [2024-11-20 10:43:29.545384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.545404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.550666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e27f0
00:26:29.024 [2024-11-20 10:43:29.551390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.551409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.560262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eee38
00:26:29.024 [2024-11-20 10:43:29.561117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.561136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.570053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e0a68
00:26:29.024 [2024-11-20 10:43:29.571019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.571037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:29.024 [2024-11-20 10:43:29.579649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f96f8
00:26:29.024 [2024-11-20 10:43:29.580732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.024 [2024-11-20 10:43:29.580751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.588170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e01f8
00:26:29.025 [2024-11-20 10:43:29.588821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.588839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.597537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fe720
00:26:29.025 [2024-11-20 10:43:29.598388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.598407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.607216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166dece0
00:26:29.025 [2024-11-20 10:43:29.608213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.608233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.616844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fdeb0
00:26:29.025 [2024-11-20 10:43:29.617963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.617983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.626752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e8d30
00:26:29.025 [2024-11-20 10:43:29.628080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.628099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.636381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fe720
00:26:29.025 [2024-11-20 10:43:29.637822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.637841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.644875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e6300
00:26:29.025 [2024-11-20 10:43:29.645872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.645891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.654506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8e88
00:26:29.025 [2024-11-20 10:43:29.655983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.656002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.663211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e9e10
00:26:29.025 [2024-11-20 10:43:29.664096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.664116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.672792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f9b30
00:26:29.025 [2024-11-20 10:43:29.673995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.025 [2024-11-20 10:43:29.674013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:29.025 [2024-11-20 10:43:29.682181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6890
00:26:29.025 [2024-11-20 10:43:29.683392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.683412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.690967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f96f8 00:26:29.025 [2024-11-20 10:43:29.691892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.691911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.700170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fc560 00:26:29.025 [2024-11-20 10:43:29.701046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.701065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.711850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8a50 00:26:29.025 [2024-11-20 10:43:29.713422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.713443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.718362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166feb58 00:26:29.025 
[2024-11-20 10:43:29.719102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.719122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.727969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ee190 00:26:29.025 [2024-11-20 10:43:29.728743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.728763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.737610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f1430 00:26:29.025 [2024-11-20 10:43:29.738707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.738726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.025 [2024-11-20 10:43:29.746911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e23b8 00:26:29.025 [2024-11-20 10:43:29.747930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.025 [2024-11-20 10:43:29.747952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.284 [2024-11-20 10:43:29.756118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfec640) with pdu=0x2000166f31b8 00:26:29.284 [2024-11-20 10:43:29.757257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.284 [2024-11-20 10:43:29.757276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.284 [2024-11-20 10:43:29.764967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8618 00:26:29.284 [2024-11-20 10:43:29.765797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.284 [2024-11-20 10:43:29.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.284 [2024-11-20 10:43:29.774164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2948 00:26:29.284 [2024-11-20 10:43:29.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.284 [2024-11-20 10:43:29.774913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.284 [2024-11-20 10:43:29.783763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fef90 00:26:29.284 [2024-11-20 10:43:29.784667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.284 [2024-11-20 10:43:29.784686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.284 [2024-11-20 10:43:29.792524] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e7818 00:26:29.284 [2024-11-20 10:43:29.793407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.793425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.801791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e73e0 00:26:29.285 [2024-11-20 10:43:29.802228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.811130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8e88 00:26:29.285 [2024-11-20 10:43:29.811797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.811816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.820053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ed4e8 00:26:29.285 [2024-11-20 10:43:29.820507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.820526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:26:29.285 [2024-11-20 10:43:29.831739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e95a0 00:26:29.285 [2024-11-20 10:43:29.833225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.833244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.838464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166dfdc0 00:26:29.285 [2024-11-20 10:43:29.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.839252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.849796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e3498 00:26:29.285 [2024-11-20 10:43:29.850944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.850975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.859551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e88f8 00:26:29.285 [2024-11-20 10:43:29.860929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.860952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.869208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fef90 00:26:29.285 [2024-11-20 10:43:29.870693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.870712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.875676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eff18 00:26:29.285 [2024-11-20 10:43:29.876346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.876365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.884399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4de8 00:26:29.285 [2024-11-20 10:43:29.885059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.885077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.895644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fb480 00:26:29.285 [2024-11-20 10:43:29.896681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.896700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.903868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fac10 00:26:29.285 [2024-11-20 10:43:29.904751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.904771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.914649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e23b8 00:26:29.285 [2024-11-20 10:43:29.915953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.915973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.923527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6890 00:26:29.285 [2024-11-20 10:43:29.924545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.924565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.932654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eb760 00:26:29.285 [2024-11-20 10:43:29.933371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.933392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.942357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ee5c8 00:26:29.285 [2024-11-20 10:43:29.942932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.942959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.952773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7da8 00:26:29.285 [2024-11-20 10:43:29.953963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.953983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.960446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2510 00:26:29.285 [2024-11-20 10:43:29.961162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.961182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.970010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6458 00:26:29.285 [2024-11-20 10:43:29.970946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 
[2024-11-20 10:43:29.970971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.979238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6020 00:26:29.285 [2024-11-20 10:43:29.980075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.980093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.988655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6890 00:26:29.285 [2024-11-20 10:43:29.989512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.989531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:29.998086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6458 00:26:29.285 [2024-11-20 10:43:29.998777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:29.998797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.285 [2024-11-20 10:43:30.006998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166dece0 00:26:29.285 [2024-11-20 10:43:30.008263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1506 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:29.285 [2024-11-20 10:43:30.008287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.016894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f31b8 00:26:29.545 [2024-11-20 10:43:30.017677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.017697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.026136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e12d8 00:26:29.545 [2024-11-20 10:43:30.027098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.035535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fd208 00:26:29.545 [2024-11-20 10:43:30.036324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.036344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.045445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ec840 00:26:29.545 [2024-11-20 10:43:30.046386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:11296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.046406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.057480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ef6a8 00:26:29.545 [2024-11-20 10:43:30.058755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.058777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.065382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de038 00:26:29.545 [2024-11-20 10:43:30.065830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.065850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.074805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166edd58 00:26:29.545 [2024-11-20 10:43:30.075514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.075534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.085361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e38d0 00:26:29.545 [2024-11-20 10:43:30.086519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.086539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.094374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e0a68 00:26:29.545 [2024-11-20 10:43:30.095329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.095351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.104442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fcdd0 00:26:29.545 [2024-11-20 10:43:30.105383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.545 [2024-11-20 10:43:30.105403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.545 [2024-11-20 10:43:30.113126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e8d30 00:26:29.545 [2024-11-20 10:43:30.114129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.114149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.123024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e2c28 00:26:29.546 
[2024-11-20 10:43:30.124176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.124197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.132641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fcdd0 00:26:29.546 [2024-11-20 10:43:30.133323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.133343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.141577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ddc00 00:26:29.546 [2024-11-20 10:43:30.142795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.142815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.152292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e8d30 00:26:29.546 [2024-11-20 10:43:30.153363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.153382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.161043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfec640) with pdu=0x2000166e8d30 00:26:29.546 [2024-11-20 10:43:30.162095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.162114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.172299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e8d30 00:26:29.546 [2024-11-20 10:43:30.173921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.173940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.179197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8618 00:26:29.546 [2024-11-20 10:43:30.180095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.180115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.188826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e1b48 00:26:29.546 [2024-11-20 10:43:30.189693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.189711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.199636] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6458 00:26:29.546 [2024-11-20 10:43:30.200684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.200703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.208446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e9e10 00:26:29.546 [2024-11-20 10:43:30.209482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.209502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.219721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e9e10 00:26:29.546 [2024-11-20 10:43:30.221283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.221303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.229311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7100 00:26:29.546 [2024-11-20 10:43:30.230833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.230852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:26:29.546 [2024-11-20 10:43:30.235972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fc998 00:26:29.546 [2024-11-20 10:43:30.236719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.236738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.247655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e95a0 00:26:29.546 [2024-11-20 10:43:30.249007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.249027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.255523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fb048 00:26:29.546 [2024-11-20 10:43:30.256346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.256369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.546 [2024-11-20 10:43:30.265006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fb048 00:26:29.546 [2024-11-20 10:43:30.265894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.546 [2024-11-20 10:43:30.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.274303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f3a28 00:26:29.806 [2024-11-20 10:43:30.275182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.275202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.284071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f92c0 00:26:29.806 [2024-11-20 10:43:30.284914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.293813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e9e10 00:26:29.806 [2024-11-20 10:43:30.294596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.294617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.303441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e5220 00:26:29.806 [2024-11-20 10:43:30.304469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.304488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.312894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4578 00:26:29.806 [2024-11-20 10:43:30.313896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.313915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.322342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e3498 00:26:29.806 [2024-11-20 10:43:30.323347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.323366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.331773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ef6a8 00:26:29.806 [2024-11-20 10:43:30.332785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.332805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.341070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f96f8 00:26:29.806 [2024-11-20 10:43:30.342032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.342054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.350288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e84c0 00:26:29.806 [2024-11-20 10:43:30.351279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.351298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.359506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ec408 00:26:29.806 [2024-11-20 10:43:30.360495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.360513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.368700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e01f8 00:26:29.806 [2024-11-20 10:43:30.369677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.369695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.379092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8618 00:26:29.806 [2024-11-20 10:43:30.380515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 
[2024-11-20 10:43:30.380533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.388702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4de8 00:26:29.806 [2024-11-20 10:43:30.390319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.390339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.395239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fe2e8 00:26:29.806 [2024-11-20 10:43:30.395973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.395991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.404540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e23b8 00:26:29.806 [2024-11-20 10:43:30.405311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.405330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.413730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f81e0 00:26:29.806 [2024-11-20 10:43:30.414530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:414 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.414550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.806 27146.00 IOPS, 106.04 MiB/s [2024-11-20T09:43:30.537Z] [2024-11-20 10:43:30.424055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4578 00:26:29.806 [2024-11-20 10:43:30.424962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.806 [2024-11-20 10:43:30.424981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.806 [2024-11-20 10:43:30.432756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eb328 00:26:29.806 [2024-11-20 10:43:30.433711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.433731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.442139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6458 00:26:29.807 [2024-11-20 10:43:30.443065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.443084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.450670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166df550 00:26:29.807 [2024-11-20 10:43:30.451412] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.451431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.460124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ed4e8 00:26:29.807 [2024-11-20 10:43:30.460857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.460876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.471142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f35f0 00:26:29.807 [2024-11-20 10:43:30.472254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.472273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.480527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e12d8 00:26:29.807 [2024-11-20 10:43:30.481661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.489712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with 
pdu=0x2000166feb58 00:26:29.807 [2024-11-20 10:43:30.490849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.490868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.498944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eb328 00:26:29.807 [2024-11-20 10:43:30.500045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.500067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.508114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f3e60 00:26:29.807 [2024-11-20 10:43:30.509217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.509236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.517311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7da8 00:26:29.807 [2024-11-20 10:43:30.518416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.518434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.807 [2024-11-20 10:43:30.526523] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfec640) with pdu=0x2000166e0a68 00:26:29.807 [2024-11-20 10:43:30.527653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.807 [2024-11-20 10:43:30.527671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.535904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e6738 00:26:30.067 [2024-11-20 10:43:30.537034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.537053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.545263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7970 00:26:30.067 [2024-11-20 10:43:30.546365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.546383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.554452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e95a0 00:26:30.067 [2024-11-20 10:43:30.555551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.555570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.563650] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f5be8 00:26:30.067 [2024-11-20 10:43:30.564747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.564765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.572891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166efae0 00:26:30.067 [2024-11-20 10:43:30.573992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.574010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.582221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ec408 00:26:30.067 [2024-11-20 10:43:30.583321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.583343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.591478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166dfdc0 00:26:30.067 [2024-11-20 10:43:30.592577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.592596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:26:30.067 [2024-11-20 10:43:30.599936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8618 00:26:30.067 [2024-11-20 10:43:30.601256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.601275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.607855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7538 00:26:30.067 [2024-11-20 10:43:30.608580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.608598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.618145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de470 00:26:30.067 [2024-11-20 10:43:30.619015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.619034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.627323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ff3c8 00:26:30.067 [2024-11-20 10:43:30.628209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.628228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.636547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e4de8 00:26:30.067 [2024-11-20 10:43:30.637425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.637444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.645738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e84c0 00:26:30.067 [2024-11-20 10:43:30.646620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.646641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.654308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fda78 00:26:30.067 [2024-11-20 10:43:30.655153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.655172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.663834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de038 00:26:30.067 [2024-11-20 10:43:30.664681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.664702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.675459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fc128 00:26:30.067 [2024-11-20 10:43:30.676924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.676944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.685131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2d80 00:26:30.067 [2024-11-20 10:43:30.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.686708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.067 [2024-11-20 10:43:30.691667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e99d8 00:26:30.067 [2024-11-20 10:43:30.692394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.067 [2024-11-20 10:43:30.692413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.700626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f0788 00:26:30.068 [2024-11-20 10:43:30.701369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.701388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.710228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e1710 00:26:30.068 [2024-11-20 10:43:30.711096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.711115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.719852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f1430 00:26:30.068 [2024-11-20 10:43:30.720820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.720838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.730105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e5658 00:26:30.068 [2024-11-20 10:43:30.731212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.731231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.739850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f1430 00:26:30.068 [2024-11-20 10:43:30.741067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.068 [2024-11-20 10:43:30.741086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.747369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2d80 00:26:30.068 [2024-11-20 10:43:30.748124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.748145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.756525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f57b0 00:26:30.068 [2024-11-20 10:43:30.757296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.757315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.765702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6cc8 00:26:30.068 [2024-11-20 10:43:30.766466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.766485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.774902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eff18 00:26:30.068 [2024-11-20 10:43:30.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16788 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.775686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.784103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fb480 00:26:30.068 [2024-11-20 10:43:30.784834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.068 [2024-11-20 10:43:30.793413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fc998 00:26:30.068 [2024-11-20 10:43:30.794198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.068 [2024-11-20 10:43:30.794217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.802838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166dece0 00:26:30.328 [2024-11-20 10:43:30.803599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.803618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.812122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e5658 00:26:30.328 [2024-11-20 10:43:30.812888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.812906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.822517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166edd58 00:26:30.328 [2024-11-20 10:43:30.823720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.823741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.831847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fd208 00:26:30.328 [2024-11-20 10:43:30.833089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.833108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.840863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f35f0 00:26:30.328 [2024-11-20 10:43:30.842075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.842093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.848196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f20d8 00:26:30.328 [2024-11-20 10:43:30.848909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.848927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.857712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166edd58 00:26:30.328 [2024-11-20 10:43:30.858460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.858479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.867191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e95a0 00:26:30.328 [2024-11-20 10:43:30.867916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.867935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.876594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f5be8 00:26:30.328 [2024-11-20 10:43:30.877359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.877378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.885838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eaab8 
00:26:30.328 [2024-11-20 10:43:30.886594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.886614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.895126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e7818 00:26:30.328 [2024-11-20 10:43:30.895869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.895888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.904313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eea00 00:26:30.328 [2024-11-20 10:43:30.905050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.905069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.913512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f0ff8 00:26:30.328 [2024-11-20 10:43:30.914258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.914277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.922787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfec640) with pdu=0x2000166e7c50 00:26:30.328 [2024-11-20 10:43:30.923534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.923553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.931985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f96f8 00:26:30.328 [2024-11-20 10:43:30.932723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-11-20 10:43:30.932742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.328 [2024-11-20 10:43:30.941194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e3498 00:26:30.328 [2024-11-20 10:43:30.941934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.941957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:30.950432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fd208 00:26:30.329 [2024-11-20 10:43:30.951196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.951226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:30.959766] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de038 00:26:30.329 [2024-11-20 10:43:30.960515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.960534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:30.968935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f92c0 00:26:30.329 [2024-11-20 10:43:30.969682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.969701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:30.978183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f4298 00:26:30.329 [2024-11-20 10:43:30.978927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.978946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:30.987387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fac10 00:26:30.329 [2024-11-20 10:43:30.988135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.988155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:26:30.329 [2024-11-20 10:43:30.996656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ebfd0 00:26:30.329 [2024-11-20 10:43:30.997396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:30.997415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.005845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f46d0 00:26:30.329 [2024-11-20 10:43:31.006575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.006594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.015080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e5220 00:26:30.329 [2024-11-20 10:43:31.015818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.015837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.024339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f0788 00:26:30.329 [2024-11-20 10:43:31.025079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.025098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.033534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e0630 00:26:30.329 [2024-11-20 10:43:31.034289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.034308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.042734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e1710 00:26:30.329 [2024-11-20 10:43:31.043472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.043491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.329 [2024-11-20 10:43:31.051964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e0a68 00:26:30.329 [2024-11-20 10:43:31.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-11-20 10:43:31.052737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.061440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ec408 00:26:30.589 [2024-11-20 10:43:31.062209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.062231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.070761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f3e60 00:26:30.589 [2024-11-20 10:43:31.071535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.071553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.080034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eb328 00:26:30.589 [2024-11-20 10:43:31.080770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.080788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.089235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f31b8 00:26:30.589 [2024-11-20 10:43:31.089985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.090004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.098523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f35f0 00:26:30.589 [2024-11-20 10:43:31.099265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.099283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.107722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f4f40 00:26:30.589 [2024-11-20 10:43:31.108471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.108488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.116887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de470 00:26:30.589 [2024-11-20 10:43:31.117640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.117658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.126102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ff3c8 00:26:30.589 [2024-11-20 10:43:31.126820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.126838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.135287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166efae0 00:26:30.589 [2024-11-20 10:43:31.136006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 
[2024-11-20 10:43:31.136024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.144488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ea248 00:26:30.589 [2024-11-20 10:43:31.145219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.145238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.153698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ea680 00:26:30.589 [2024-11-20 10:43:31.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.154456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.162945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e5a90 00:26:30.589 [2024-11-20 10:43:31.163700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.163718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.172205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e95a0 00:26:30.589 [2024-11-20 10:43:31.172924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8708 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.172943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.181391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f5be8 00:26:30.589 [2024-11-20 10:43:31.182039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.182058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.190629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eaab8 00:26:30.589 [2024-11-20 10:43:31.191395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.191414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.199963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e7818 00:26:30.589 [2024-11-20 10:43:31.200718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.200737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.209490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eea00 00:26:30.589 [2024-11-20 10:43:31.210233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:64 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.210252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.218682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f0ff8 00:26:30.589 [2024-11-20 10:43:31.219426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.219445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.227899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e7c50 00:26:30.589 [2024-11-20 10:43:31.228644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.589 [2024-11-20 10:43:31.228663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.589 [2024-11-20 10:43:31.238368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f96f8 00:26:30.590 [2024-11-20 10:43:31.239495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.239514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.247636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f46d0 00:26:30.590 [2024-11-20 10:43:31.248543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.248564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.256763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166df118 00:26:30.590 [2024-11-20 10:43:31.257652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.257673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.265936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166edd58 00:26:30.590 [2024-11-20 10:43:31.266815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.266835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.274419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e9168 00:26:30.590 [2024-11-20 10:43:31.275284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.275304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.283573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166ebfd0 00:26:30.590 
[2024-11-20 10:43:31.284429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.284447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.292750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f7970 00:26:30.590 [2024-11-20 10:43:31.293714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.293733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.302104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f2d80 00:26:30.590 [2024-11-20 10:43:31.303023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.590 [2024-11-20 10:43:31.311553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166eaab8 00:26:30.590 [2024-11-20 10:43:31.312476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.590 [2024-11-20 10:43:31.312495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.321638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfec640) with pdu=0x2000166e9e10 00:26:30.849 [2024-11-20 10:43:31.322836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.322855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.332578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166de470 00:26:30.849 [2024-11-20 10:43:31.334139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.334158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.339208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f3e60 00:26:30.849 [2024-11-20 10:43:31.340048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.340068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.350347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f6cc8 00:26:30.849 [2024-11-20 10:43:31.351450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.351470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.359275] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e49b0 00:26:30.849 [2024-11-20 10:43:31.360355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.360374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.368932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fc560 00:26:30.849 [2024-11-20 10:43:31.370145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.370165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.377475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166fa7d8 00:26:30.849 [2024-11-20 10:43:31.378445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.378464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.386310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f5be8 00:26:30.849 [2024-11-20 10:43:31.386956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.386980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:26:30.849 [2024-11-20 10:43:31.394921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e49b0 00:26:30.849 [2024-11-20 10:43:31.395641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.395660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.404541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166f8e88 00:26:30.849 [2024-11-20 10:43:31.405301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.405321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.849 [2024-11-20 10:43:31.415695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfec640) with pdu=0x2000166e27f0 00:26:30.849 [2024-11-20 10:43:31.416967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.849 [2024-11-20 10:43:31.416987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.849 27335.50 IOPS, 106.78 MiB/s 00:26:30.849 Latency(us) 00:26:30.849 [2024-11-20T09:43:31.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.849 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:30.849 nvme0n1 : 2.00 27368.05 106.91 0.00 0.00 4672.70 2265.27 12708.29 00:26:30.849 [2024-11-20T09:43:31.580Z] 
=================================================================================================================== 00:26:30.849 [2024-11-20T09:43:31.580Z] Total : 27368.05 106.91 0.00 0.00 4672.70 2265.27 12708.29 00:26:30.849 { 00:26:30.849 "results": [ 00:26:30.849 { 00:26:30.849 "job": "nvme0n1", 00:26:30.849 "core_mask": "0x2", 00:26:30.849 "workload": "randwrite", 00:26:30.849 "status": "finished", 00:26:30.849 "queue_depth": 128, 00:26:30.849 "io_size": 4096, 00:26:30.849 "runtime": 2.002298, 00:26:30.849 "iops": 27368.054105832398, 00:26:30.849 "mibps": 106.9064613509078, 00:26:30.849 "io_failed": 0, 00:26:30.849 "io_timeout": 0, 00:26:30.849 "avg_latency_us": 4672.698075829692, 00:26:30.849 "min_latency_us": 2265.2660869565216, 00:26:30.849 "max_latency_us": 12708.285217391305 00:26:30.849 } 00:26:30.849 ], 00:26:30.849 "core_count": 1 00:26:30.849 } 00:26:30.849 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:30.849 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:30.849 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:30.849 | .driver_specific 00:26:30.849 | .nvme_error 00:26:30.849 | .status_code 00:26:30.849 | .command_transient_transport_error' 00:26:30.850 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3634597 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3634597 ']' 00:26:31.109 10:43:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3634597 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3634597 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3634597' 00:26:31.109 killing process with pid 3634597 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3634597 00:26:31.109 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.109 00:26:31.109 Latency(us) 00:26:31.109 [2024-11-20T09:43:31.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.109 [2024-11-20T09:43:31.840Z] =================================================================================================================== 00:26:31.109 [2024-11-20T09:43:31.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.109 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3634597 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# rw=randwrite 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3635216 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3635216 /var/tmp/bperf.sock 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3635216 ']' 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.368 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.368 [2024-11-20 10:43:31.899921] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:26:31.368 [2024-11-20 10:43:31.899974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635216 ] 00:26:31.368 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.368 Zero copy mechanism will not be used. 00:26:31.368 [2024-11-20 10:43:31.973971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.368 [2024-11-20 10:43:32.016855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.629 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.203 nvme0n1 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:32.203 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.203 Zero copy mechanism will not be used. 00:26:32.203 Running I/O for 2 seconds... 
00:26:32.203 [2024-11-20 10:43:32.752651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.752736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.752764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.757114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.757185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.757208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.761467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.761570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.761593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.765754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.765825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.765846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.770001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.770064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.770083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.774446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.774505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.774526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.778678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.778784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.778803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.782927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.783024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.783044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.787122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.787224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.787242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.791270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.791369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.791388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.795518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.795596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.799713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.799787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.799806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.803887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.803972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.803996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.808081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.808148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.808168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.812295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.812359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.812378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.817002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.817065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.203 [2024-11-20 10:43:32.817083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.821329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.821390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.821410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.825534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.203 [2024-11-20 10:43:32.825613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.203 [2024-11-20 10:43:32.825632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.203 [2024-11-20 10:43:32.829672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.829742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.829761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.833818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.833883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.833902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.838007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.838068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.838087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.842148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.842218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.842237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.846303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.846365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.850436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.850491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.850510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.854607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.854668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.854687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.858787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.858842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.858860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.862935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.863002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.863021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.867074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.867148] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.867168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.871206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.871273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.871292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.875473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.875540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.875559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.879639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.879708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.879728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.883809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with 
pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.883882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.883900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.888264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.888339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.892911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.892973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.892992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.897271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.897349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.897368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.901453] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.901507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.901526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.905762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.905824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.905843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.909984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.910044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.910064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.914337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.914394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.914419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.918573] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.918630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.918650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.922775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.922834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.922853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.204 [2024-11-20 10:43:32.927130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.204 [2024-11-20 10:43:32.927199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.204 [2024-11-20 10:43:32.927219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.464 [2024-11-20 10:43:32.931503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.464 [2024-11-20 10:43:32.931581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.464 [2024-11-20 10:43:32.931600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:32.464 [2024-11-20 10:43:32.936052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.936111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.936131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.940414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.940474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.940492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.944864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.944926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.949047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.949119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.949138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.953186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.953257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.953276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.957398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.957458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.957477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.961943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.962018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.962036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.966620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.966703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.966722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.970910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.970973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.970991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.975222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.975280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.975298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.979508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.979562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.979580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.983749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.983810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.983829] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.988013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.988070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.988089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.992481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.992551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.992570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:32.997353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:32.997423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:32.997441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.001753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.001820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.465 [2024-11-20 10:43:33.001838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.006076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.006144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.010556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.010614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.010633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.015016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.015076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.015095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.019599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.019658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.019676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.024074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.024136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.024155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.028392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.028454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.032758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.032812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.032830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.037151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.037218] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.037237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.465 [2024-11-20 10:43:33.041645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.465 [2024-11-20 10:43:33.041701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.465 [2024-11-20 10:43:33.041720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.046397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.046464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.046482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.051563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.051619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.051638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.056866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.056971] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.056990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.062099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.062189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.067334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.067399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.067418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.072193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.072254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.072276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.077234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 
00:26:32.466 [2024-11-20 10:43:33.077308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.077327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.082658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.082714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.082733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.088096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.088151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.088169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.093417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.093560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.093580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.098726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.098857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.098876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.104357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.104414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.104434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.109835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.109911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.109930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.115101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.115154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.115172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.120533] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.120675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.120694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.126018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.126081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.126099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.131445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.131501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.131520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.137234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.137313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.137333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:32.466 [2024-11-20 10:43:33.142367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.142426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.142445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.147478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.147565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.147584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.152705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.152774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.152793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.157897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.157961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.157981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.163499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.163571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.163590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.169082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.466 [2024-11-20 10:43:33.169203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.466 [2024-11-20 10:43:33.169221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.466 [2024-11-20 10:43:33.174198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.467 [2024-11-20 10:43:33.174250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.467 [2024-11-20 10:43:33.174269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.467 [2024-11-20 10:43:33.179542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.467 [2024-11-20 10:43:33.179602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.467 [2024-11-20 10:43:33.179620] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.467 [2024-11-20 10:43:33.184650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.467 [2024-11-20 10:43:33.184786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.467 [2024-11-20 10:43:33.184805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.467 [2024-11-20 10:43:33.189989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.467 [2024-11-20 10:43:33.190128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.467 [2024-11-20 10:43:33.190146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.195847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.195908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.195927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.200944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.201013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.201032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.206092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.206150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.206169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.211232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.211291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.211313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.216203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.216291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.222013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.222097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.727 [2024-11-20 10:43:33.222117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.227704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.227761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.227779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.232903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.232972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.232991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.237981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.238056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.238075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.242859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.242923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.242941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.247572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.247654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.247673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.252283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.252352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.727 [2024-11-20 10:43:33.252371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.727 [2024-11-20 10:43:33.256998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.727 [2024-11-20 10:43:33.257055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.728 [2024-11-20 10:43:33.257074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.728 [2024-11-20 10:43:33.261722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.728 [2024-11-20 10:43:33.261781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.728 [2024-11-20 10:43:33.261800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.728 [2024-11-20 10:43:33.266806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.728 [2024-11-20 10:43:33.266860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.728 [2024-11-20 10:43:33.266880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.728 [2024-11-20 10:43:33.271853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.728 [2024-11-20 10:43:33.271931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.728 [2024-11-20 10:43:33.271956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.728 [2024-11-20 10:43:33.277136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.728 [2024-11-20 10:43:33.277234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.728 [2024-11-20 10:43:33.277253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.728 [2024-11-20 10:43:33.281808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.728 [2024-11-20 10:43:33.281867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.281885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.286315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.286380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.286399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.290771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.290826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.290845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.295493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.295549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.295567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.300394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.300469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.300488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.305034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.305093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.305112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.309813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.309923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.309941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.315699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.315852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.315871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.322699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.322831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.322850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.329405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.329553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.329572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.335897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.335987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.336006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.340936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.341027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.341045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.346191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.346290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.346313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.351255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.351335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.728 [2024-11-20 10:43:33.351353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.728 [2024-11-20 10:43:33.355987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.728 [2024-11-20 10:43:33.356083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.356102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.360756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.360847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.360866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.365580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.365670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.365689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.370490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.370666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.370685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.375816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.375905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.375925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.381237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.381350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.381370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.386929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.387021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.387041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.393313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.393409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.393428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.398930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.399002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.399021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.403815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.403873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.403892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.408525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.408579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.408596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.413009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.413069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.413087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.417373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.417429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.417447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.421788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.421850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.421868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.426198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.426257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.426276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.430589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.430645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.430664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.434984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.435043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.435062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.439373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.439428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.439447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.443863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.443926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.443945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.448340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.729 [2024-11-20 10:43:33.448404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.729 [2024-11-20 10:43:33.448423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.729 [2024-11-20 10:43:33.452784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.730 [2024-11-20 10:43:33.452851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.730 [2024-11-20 10:43:33.452870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.457301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.457367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.457385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.461808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.461873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.461892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.466244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.466346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.471258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.471436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.471458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.477347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.477522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.477541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.482462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.482568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.482587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.487439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.487514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.487534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.492165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.492319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.492338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.497164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.497267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.497286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.501752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.501862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.501880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.506603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.506760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.506778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.511414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.511516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.511534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.516278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.516393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.516412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.521396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.521536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.521556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.526312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.526470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.526489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.531438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.531531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.531549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.536684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.536775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.536794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.542496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.542555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.542574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.547796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.547972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.547992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.553944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.554140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.554158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.559280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.559374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.990 [2024-11-20 10:43:33.559392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.990 [2024-11-20 10:43:33.564714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.990 [2024-11-20 10:43:33.564859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.564877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.569942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.570036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.570055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.575523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.575605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.575624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.579981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.580043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.580062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.584427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.584486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.584505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.588974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.589069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.589088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.594258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.594429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.594447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.600810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.600921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.600939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.605638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.605720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.605744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.610470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.610584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.615229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.615326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.615345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.620030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.620200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.624873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.624965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.624984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.629518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.629653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.629672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.634396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.634451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.634471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.638819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.638879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.638898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.643228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.643283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.643302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.647619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.647692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.647710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.991 [2024-11-20 10:43:33.652069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8
00:26:32.991 [2024-11-20 10:43:33.652139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.991 [2024-11-20 10:43:33.652158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.656674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.656730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.656750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.661173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.661230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.661250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.665610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.665677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.665696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.670040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.670094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.670113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.674598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.674676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.674695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.679117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.679175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.679195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.683616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.683687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.683706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.688077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.688134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.991 [2024-11-20 10:43:33.688153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.991 [2024-11-20 10:43:33.692595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.991 [2024-11-20 10:43:33.692667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.692686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.992 [2024-11-20 10:43:33.697092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.992 [2024-11-20 10:43:33.697149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.697167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.992 [2024-11-20 10:43:33.701525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.992 [2024-11-20 10:43:33.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.701602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.992 [2024-11-20 10:43:33.705964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.992 [2024-11-20 10:43:33.706021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.706039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.992 [2024-11-20 10:43:33.710359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.992 [2024-11-20 10:43:33.710428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.710447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.992 [2024-11-20 10:43:33.714820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:32.992 [2024-11-20 10:43:33.714879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.992 [2024-11-20 10:43:33.714898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.719294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.719356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.719374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.723850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with 
pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.723913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.723936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.728340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.728402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.728420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.732812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.732873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.732891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.737221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.737284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.737303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.741684] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.741739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.741757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.746283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.746383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.746402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.750868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.751010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.751028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 6493.00 IOPS, 811.62 MiB/s [2024-11-20T09:43:33.984Z] [2024-11-20 10:43:33.756597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.756654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.756674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.761079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.761141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.761160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.765531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.765596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.765615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.770006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.770060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.770079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.774543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.774599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.774618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.779034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.779103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.783478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.783534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.783552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.787868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.787931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.787955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.792276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.792341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.792359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.796797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.796870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.796889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.801210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.801273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.801292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.805657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.805715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.805734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.810085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.810151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 
[2024-11-20 10:43:33.810170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.814486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.814547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.814565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.819085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.819154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.253 [2024-11-20 10:43:33.819173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.253 [2024-11-20 10:43:33.824696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.253 [2024-11-20 10:43:33.824894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.824913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.830666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.830841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.830860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.836721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.836873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.836892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.842345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.842647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.842668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.848099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.848454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.848480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.854083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.854409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.854430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.860020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.860343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.860364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.865966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.866266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.866287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.871780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.872096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.872118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.878169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.878496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.878516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.884102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.884449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.884470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.890188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.890531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.890551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.896049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.896358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.896378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.901980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 
[2024-11-20 10:43:33.902293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.902317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.907685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.907986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.908007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.913747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.914061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.914081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.919844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.920111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.920132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.926029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.926292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.926312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.932389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.932664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.937302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.937553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.937575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.941942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.942203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.942223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.946651] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.946902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.946923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.951545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.951808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.951829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.956380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.956630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.956651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.961402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.961669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.961689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:33.254 [2024-11-20 10:43:33.966194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.966462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.966482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.971034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.971285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.254 [2024-11-20 10:43:33.971305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.254 [2024-11-20 10:43:33.975986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.254 [2024-11-20 10:43:33.976243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.255 [2024-11-20 10:43:33.976263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:33.981232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:33.981488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:33.981509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:33.986295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:33.986550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:33.986571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:33.991571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:33.991831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:33.991851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:33.997263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:33.997516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:33.997537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.001794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.002051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.002072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.006299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.006564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.006585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.010780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.011035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.011055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.015596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.015848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.015869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.020157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.020439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.020459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.024737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.025013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.025035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.029297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.029547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.029568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.033744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.034001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.034025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.038230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.038481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 
[2024-11-20 10:43:34.038502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.042666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.042915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.042935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.047149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.047398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.047418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.051564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.051816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.051836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.056068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.056334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.056354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.060482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.060733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.060753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.064909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.065184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.069341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.069590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.069610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.073802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.074059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.074080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.515 [2024-11-20 10:43:34.078305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.515 [2024-11-20 10:43:34.078568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.515 [2024-11-20 10:43:34.078588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.082783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.083039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.087233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.087484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.087504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.091643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.091894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.091914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.096263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.096528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.096549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.101153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.101416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.101438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.106542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.112338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 
[2024-11-20 10:43:34.112586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.112607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.118525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.118775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.118797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.123374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.123631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.123652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.128146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.128398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.128418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.132650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.132901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.132921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.137234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.137483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.137504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.141715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.141971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.141991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.146184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.146436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.146457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.151014] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.151265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.151286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.155564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.155826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.155851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.160098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.160349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.160370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.164601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.164863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.164884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:33.516 [2024-11-20 10:43:34.169104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.169355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.169375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.173615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.173878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.173898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.178174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.178429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.178449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.182668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.182931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.182960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.187208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.187461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.187482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.191718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.191989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.192009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.196275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.196549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.196569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.200796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.201066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.201086] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.205287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.516 [2024-11-20 10:43:34.205540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.516 [2024-11-20 10:43:34.205561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.516 [2024-11-20 10:43:34.209774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.210040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.210062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.214305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.214569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.214591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.218777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.219034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.219055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.223402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.223675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.223695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.227990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.228246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.228267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.232534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.232785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.232805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.237032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.237287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:33.517 [2024-11-20 10:43:34.237310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.517 [2024-11-20 10:43:34.241789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.517 [2024-11-20 10:43:34.242054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.517 [2024-11-20 10:43:34.242075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.246926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.247177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.247198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.252228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.252480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.252501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.257547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.257811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.257832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.263041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.263120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.263140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.269746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.270081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.270103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.276777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.277040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.277062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.283647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.283899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.283924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.288788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.289045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.289065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.293384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.293643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.293664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.298018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.298279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.298300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.302516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.302768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.302789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.307059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.307310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.307332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.312020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.312273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.312294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.317060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.317312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.317333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.322072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with 
pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.322154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.322173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.327382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.327642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.327664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.332380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.332635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.332656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.337247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.777 [2024-11-20 10:43:34.337505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.777 [2024-11-20 10:43:34.337527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.777 [2024-11-20 10:43:34.342124] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.342385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.342406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.347153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.347408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.347430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.352576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.352831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.352853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.357977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.358235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.358257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 
10:43:34.362826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.363086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.363107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.368002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.368253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.368274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.374826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.375093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.375115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.380443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.380704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.380724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.386181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.386438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.386461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.392203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.392463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.392485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.398061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.398137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.398157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.405104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.405352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.405374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.411779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.412072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.412109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.418964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.419333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.419354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.425387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.425685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.425710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.432631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.432912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.432933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.439339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.439588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.439609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.445961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.446245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.446266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.453258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.453558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.453579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.460232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.460515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 
[2024-11-20 10:43:34.460535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.466455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.466775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.466795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.473822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.474089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.474110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.480467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.480753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.480774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.487046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.487316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.487338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.493639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.493937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.493967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.778 [2024-11-20 10:43:34.500115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:33.778 [2024-11-20 10:43:34.500439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.778 [2024-11-20 10:43:34.500460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.506856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.507074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.507094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.513202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.513517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.513539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.520068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.520181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.520200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.526584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.526765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.526785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.533352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.533545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.533563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.540609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.540795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.540813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.546619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.546693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.546712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.552096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.552150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.552169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.557816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.557930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.557955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.562108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 
00:26:34.039 [2024-11-20 10:43:34.562160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.562178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.566204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.566265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.566283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.570320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.570397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.570416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.575064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.575240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.575260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.581195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.581346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.581366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.586252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.586342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.586365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.590661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.590744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.590763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.595338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.595461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.595479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.599844] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.039 [2024-11-20 10:43:34.599918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.039 [2024-11-20 10:43:34.599937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.039 [2024-11-20 10:43:34.604389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.604506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.604525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.609042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.609165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.609184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.613657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.613776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.613795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:34.040 [2024-11-20 10:43:34.619577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.619752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.619771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.625642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.625745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.625763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.631649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.631706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.631724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.637376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.637429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.637447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.642788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.642852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.642870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.647611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.647667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.647686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.653168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.653297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.653316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.659273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.659342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.659361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.664492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.664550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.664569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.669910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.669981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.670000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.674792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.674853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.674871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.679277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.679332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.679351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.683453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.683540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.683559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.687936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.687994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.688013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.692423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.692476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.692494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.696913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:34.040 [2024-11-20 10:43:34.696998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.702099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.702215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.706946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.707013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.707032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.711058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.711114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.711132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.715140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.715190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.715211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.719223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.719276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.719294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.723324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.723393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.040 [2024-11-20 10:43:34.723412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.040 [2024-11-20 10:43:34.727162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.040 [2024-11-20 10:43:34.727243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.727262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.041 [2024-11-20 10:43:34.731016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.041 [2024-11-20 10:43:34.731162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.731179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.041 [2024-11-20 10:43:34.736121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.041 [2024-11-20 10:43:34.736188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.736206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.041 [2024-11-20 10:43:34.741347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.041 [2024-11-20 10:43:34.741462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.741481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.041 [2024-11-20 10:43:34.746906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.041 [2024-11-20 10:43:34.747061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.747080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.041 [2024-11-20 10:43:34.753191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfecb20) with pdu=0x2000166ff3c8 00:26:34.041 
[2024-11-20 10:43:34.753342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.041 [2024-11-20 10:43:34.753361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.041 6236.00 IOPS, 779.50 MiB/s 00:26:34.041 Latency(us) 00:26:34.041 [2024-11-20T09:43:34.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.041 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:34.041 nvme0n1 : 2.00 6232.00 779.00 0.00 0.00 2562.56 1823.61 7208.96 00:26:34.041 [2024-11-20T09:43:34.772Z] =================================================================================================================== 00:26:34.041 [2024-11-20T09:43:34.772Z] Total : 6232.00 779.00 0.00 0.00 2562.56 1823.61 7208.96 00:26:34.041 { 00:26:34.041 "results": [ 00:26:34.041 { 00:26:34.041 "job": "nvme0n1", 00:26:34.041 "core_mask": "0x2", 00:26:34.041 "workload": "randwrite", 00:26:34.041 "status": "finished", 00:26:34.041 "queue_depth": 16, 00:26:34.041 "io_size": 131072, 00:26:34.041 "runtime": 2.004331, 00:26:34.041 "iops": 6232.004594051581, 00:26:34.041 "mibps": 779.0005742564476, 00:26:34.041 "io_failed": 0, 00:26:34.041 "io_timeout": 0, 00:26:34.041 "avg_latency_us": 2562.558244579576, 00:26:34.041 "min_latency_us": 1823.6104347826088, 00:26:34.041 "max_latency_us": 7208.96 00:26:34.041 } 00:26:34.041 ], 00:26:34.041 "core_count": 1 00:26:34.041 } 00:26:34.299 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:34.299 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:34.299 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:34.299 | 
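The stream of `data_crc32_calc_done: *ERROR*: Data digest error` records above means the receiver's computed data digest did not match the digest carried in the data PDU; NVMe/TCP's optional header and data digests are CRC-32C (Castagnoli) checksums, and each mismatch is surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly what this digest-error test injects and counts. A minimal bitwise CRC-32C sketch (illustration only; SPDK uses optimized table/instruction-based implementations, not this loop):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78),
    the checksum NVMe/TCP uses for its optional header/data digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the polynomial when the low bit was set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```

A corrupted payload (as injected here) yields a different CRC than the one the initiator placed in the PDU's DIGEST field, so the target flags the transfer rather than acking it.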
.driver_specific 00:26:34.299 | .nvme_error 00:26:34.299 | .status_code 00:26:34.300 | .command_transient_transport_error' 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3635216 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3635216 ']' 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3635216 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.300 10:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3635216 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3635216' 00:26:34.559 killing process with pid 3635216 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3635216 00:26:34.559 Received shutdown signal, test time was about 2.000000 seconds 00:26:34.559 00:26:34.559 Latency(us) 00:26:34.559 [2024-11-20T09:43:35.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
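The `get_transient_errcount` check above fetches per-bdev statistics over the RPC socket (`rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`) and drills into the reply with jq, then asserts the count is positive (`(( 403 > 0 ))`). The same extraction in Python, against a hypothetical reply trimmed to just the fields the jq filter touches (the real `bdev_get_iostat` output carries many more):

```python
import json

# Hypothetical, trimmed reply shaped like `bdev_get_iostat -b nvme0n1` output.
iostat_reply = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 403
          }
        }
      }
    }
  ]
}
""")

# Equivalent of the jq filter in the log:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#   | .command_transient_transport_error
count = (iostat_reply["bdevs"][0]["driver_specific"]
         ["nvme_error"]["status_code"]
         ["command_transient_transport_error"])
print(count)  # → 403

# Mirrors the script's pass condition: (( count > 0 ))
assert count > 0
```

In this run the counter came back as 403, so the digest-error path was exercised and the test passes.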
Average min max 00:26:34.559 [2024-11-20T09:43:35.290Z] =================================================================================================================== 00:26:34.559 [2024-11-20T09:43:35.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3635216 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3633456 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3633456 ']' 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3633456 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3633456 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3633456' 00:26:34.559 killing process with pid 3633456 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3633456 00:26:34.559 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3633456 00:26:34.818 00:26:34.818 real 0m13.906s 00:26:34.818 user 0m26.658s 00:26:34.818 sys 0m4.565s 00:26:34.818 10:43:35 
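The `killprocess` helper traced above first probes liveness with `kill -0 $pid` (signal 0 checks existence/permission without delivering anything), verifies the process name via `ps --no-headers -o comm=` so it never kills `sudo` itself, and only then sends the real signal. A rough Python analogue of the liveness probe (a sketch of the pattern, not SPDK's helper):

```python
import os

def process_alive(pid: int) -> bool:
    """Analogue of the shell idiom `kill -0 $pid`: sending signal 0
    performs the existence/permission check without signalling."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user
    return True
```

The `ps -o comm=` name check matters because after `kill -0` succeeds the pid could still belong to a recycled, unrelated process; comparing the command name before `kill -9` narrows that race.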
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.818 ************************************ 00:26:34.818 END TEST nvmf_digest_error 00:26:34.818 ************************************ 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.818 rmmod nvme_tcp 00:26:34.818 rmmod nvme_fabrics 00:26:34.818 rmmod nvme_keyring 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3633456 ']' 00:26:34.818 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3633456 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3633456 ']' 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3633456 00:26:34.819 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3633456) - No such process 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3633456 is not found' 00:26:34.819 Process with pid 3633456 is not found 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.819 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.355 00:26:37.355 real 0m36.225s 00:26:37.355 user 0m55.130s 00:26:37.355 sys 0m13.765s 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.355 
************************************ 00:26:37.355 END TEST nvmf_digest 00:26:37.355 ************************************ 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.355 ************************************ 00:26:37.355 START TEST nvmf_bdevperf 00:26:37.355 ************************************ 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:37.355 * Looking for test storage... 
00:26:37.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.355 --rc genhtml_branch_coverage=1 00:26:37.355 --rc genhtml_function_coverage=1 00:26:37.355 --rc genhtml_legend=1 00:26:37.355 --rc geninfo_all_blocks=1 00:26:37.355 --rc geninfo_unexecuted_blocks=1 00:26:37.355 00:26:37.355 ' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.355 --rc genhtml_branch_coverage=1 00:26:37.355 --rc genhtml_function_coverage=1 00:26:37.355 --rc genhtml_legend=1 00:26:37.355 --rc geninfo_all_blocks=1 00:26:37.355 --rc geninfo_unexecuted_blocks=1 00:26:37.355 00:26:37.355 ' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.355 --rc genhtml_branch_coverage=1 00:26:37.355 --rc genhtml_function_coverage=1 00:26:37.355 --rc genhtml_legend=1 00:26:37.355 --rc geninfo_all_blocks=1 00:26:37.355 --rc geninfo_unexecuted_blocks=1 00:26:37.355 00:26:37.355 ' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.355 --rc genhtml_branch_coverage=1 00:26:37.355 --rc genhtml_function_coverage=1 00:26:37.355 --rc genhtml_legend=1 00:26:37.355 --rc geninfo_all_blocks=1 00:26:37.355 --rc geninfo_unexecuted_blocks=1 00:26:37.355 00:26:37.355 ' 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
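The `cmp_versions` trace above (for `lt 1.15 2`) shows how scripts/common.sh compares version strings: split both on `.`, `-`, or `:`, pad the shorter list, and compare the numeric components left to right. A Python sketch of that component-wise compare (assuming purely numeric components, as in the traced call; the shell helper is the authoritative version):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Component-wise version compare: split on '.', '-' or ':',
    treat missing trailing components as 0, compare numerically."""
    va = [int(x) for x in re.split(r"[.:-]", a)]
    vb = [int(x) for x in re.split(r"[.:-]", b)]
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return False  # equal

print(version_lt("1.15", "2"))  # → True, matching the `lt 1.15 2` trace
```

Note why plain string comparison would fail here: `"1.15" < "2"` happens to hold lexically, but `"1.9" < "1.15"` would not, while the numeric split gets both right.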
NVMF_IP_LEAST_ADDR=8 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.355 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.356 10:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.923 10:43:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.923 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:43.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.924 
10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:43.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:43.924 Found net devices under 0000:86:00.0: cvl_0_0 00:26:43.924 10:43:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:43.924 Found net devices under 0000:86:00.1: cvl_0_1 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:26:43.924 00:26:43.924 --- 10.0.0.2 ping statistics --- 00:26:43.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.924 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:26:43.924 00:26:43.924 --- 10.0.0.1 ping statistics --- 00:26:43.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.924 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3639225 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3639225 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3639225 ']' 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.924 10:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.924 [2024-11-20 10:43:43.850088] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:26:43.924 [2024-11-20 10:43:43.850141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.924 [2024-11-20 10:43:43.927824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.924 [2024-11-20 10:43:43.970828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.924 [2024-11-20 10:43:43.970864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.925 [2024-11-20 10:43:43.970872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.925 [2024-11-20 10:43:43.970878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.925 [2024-11-20 10:43:43.970883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:43.925 [2024-11-20 10:43:43.972239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.925 [2024-11-20 10:43:43.972345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.925 [2024-11-20 10:43:43.972347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 [2024-11-20 10:43:44.108800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 Malloc0 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 [2024-11-20 10:43:44.170154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:43.925 
10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.925 { 00:26:43.925 "params": { 00:26:43.925 "name": "Nvme$subsystem", 00:26:43.925 "trtype": "$TEST_TRANSPORT", 00:26:43.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.925 "adrfam": "ipv4", 00:26:43.925 "trsvcid": "$NVMF_PORT", 00:26:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.925 "hdgst": ${hdgst:-false}, 00:26:43.925 "ddgst": ${ddgst:-false} 00:26:43.925 }, 00:26:43.925 "method": "bdev_nvme_attach_controller" 00:26:43.925 } 00:26:43.925 EOF 00:26:43.925 )") 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:43.925 10:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:43.925 "params": { 00:26:43.925 "name": "Nvme1", 00:26:43.925 "trtype": "tcp", 00:26:43.925 "traddr": "10.0.0.2", 00:26:43.925 "adrfam": "ipv4", 00:26:43.925 "trsvcid": "4420", 00:26:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.925 "hdgst": false, 00:26:43.925 "ddgst": false 00:26:43.925 }, 00:26:43.925 "method": "bdev_nvme_attach_controller" 00:26:43.925 }' 00:26:43.925 [2024-11-20 10:43:44.222634] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:26:43.925 [2024-11-20 10:43:44.222675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639247 ] 00:26:43.925 [2024-11-20 10:43:44.300200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.925 [2024-11-20 10:43:44.341812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.925 Running I/O for 1 seconds... 00:26:45.303 11083.00 IOPS, 43.29 MiB/s 00:26:45.303 Latency(us) 00:26:45.303 [2024-11-20T09:43:46.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:45.303 Verification LBA range: start 0x0 length 0x4000 00:26:45.303 Nvme1n1 : 1.00 11163.51 43.61 0.00 0.00 11421.20 2550.21 11226.60 00:26:45.303 [2024-11-20T09:43:46.034Z] =================================================================================================================== 00:26:45.303 [2024-11-20T09:43:46.034Z] Total : 11163.51 43.61 0.00 0.00 11421.20 2550.21 11226.60 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3639491 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.303 { 00:26:45.303 "params": { 00:26:45.303 "name": "Nvme$subsystem", 00:26:45.303 "trtype": "$TEST_TRANSPORT", 00:26:45.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.303 "adrfam": "ipv4", 00:26:45.303 "trsvcid": "$NVMF_PORT", 00:26:45.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.303 "hdgst": ${hdgst:-false}, 00:26:45.303 "ddgst": ${ddgst:-false} 00:26:45.303 }, 00:26:45.303 "method": "bdev_nvme_attach_controller" 00:26:45.303 } 00:26:45.303 EOF 00:26:45.303 )") 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:45.303 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:45.304 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:45.304 "params": { 00:26:45.304 "name": "Nvme1", 00:26:45.304 "trtype": "tcp", 00:26:45.304 "traddr": "10.0.0.2", 00:26:45.304 "adrfam": "ipv4", 00:26:45.304 "trsvcid": "4420", 00:26:45.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.304 "hdgst": false, 00:26:45.304 "ddgst": false 00:26:45.304 }, 00:26:45.304 "method": "bdev_nvme_attach_controller" 00:26:45.304 }' 00:26:45.304 [2024-11-20 10:43:45.830363] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:26:45.304 [2024-11-20 10:43:45.830411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639491 ] 00:26:45.304 [2024-11-20 10:43:45.907840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.304 [2024-11-20 10:43:45.946479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.562 Running I/O for 15 seconds... 00:26:47.877 11047.00 IOPS, 43.15 MiB/s [2024-11-20T09:43:48.868Z] 11096.00 IOPS, 43.34 MiB/s [2024-11-20T09:43:48.868Z] 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3639225 00:26:48.137 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:48.137 [2024-11-20 10:43:48.804163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.137 [2024-11-20 10:43:48.804208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.137 [2024-11-20 10:43:48.804233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.137 [2024-11-20 10:43:48.804252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.137 [2024-11-20 10:43:48.804274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.137 [2024-11-20 10:43:48.804292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:48.137 [2024-11-20 10:43:48.804369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.137 [2024-11-20 10:43:48.804452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.137 [2024-11-20 10:43:48.804464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.140 [2024-11-20 10:43:48.806332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.806340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321cf0 is same with the state(6) to be set 00:26:48.140 [2024-11-20 10:43:48.806349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.140 [2024-11-20 10:43:48.806354] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.140 [2024-11-20 10:43:48.806360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93872 len:8 PRP1 0x0 PRP2 0x0 00:26:48.140 [2024-11-20 10:43:48.806366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.140 [2024-11-20 10:43:48.809266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.140 [2024-11-20 10:43:48.809321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.809834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.141 [2024-11-20 10:43:48.809851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.141 [2024-11-20 10:43:48.809860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.141 [2024-11-20 10:43:48.810044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.810223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.141 [2024-11-20 10:43:48.810232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.141 [2024-11-20 10:43:48.810240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.141 [2024-11-20 10:43:48.810248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.141 [2024-11-20 10:43:48.822560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.141 [2024-11-20 10:43:48.822957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.141 [2024-11-20 10:43:48.822977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.141 [2024-11-20 10:43:48.822986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.141 [2024-11-20 10:43:48.823165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.823344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.141 [2024-11-20 10:43:48.823354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.141 [2024-11-20 10:43:48.823362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.141 [2024-11-20 10:43:48.823370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.141 [2024-11-20 10:43:48.835693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.141 [2024-11-20 10:43:48.836128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.141 [2024-11-20 10:43:48.836148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.141 [2024-11-20 10:43:48.836157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.141 [2024-11-20 10:43:48.836336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.836515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.141 [2024-11-20 10:43:48.836525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.141 [2024-11-20 10:43:48.836532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.141 [2024-11-20 10:43:48.836539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.141 [2024-11-20 10:43:48.848842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.141 [2024-11-20 10:43:48.849290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.141 [2024-11-20 10:43:48.849308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.141 [2024-11-20 10:43:48.849317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.141 [2024-11-20 10:43:48.849494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.849672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.141 [2024-11-20 10:43:48.849682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.141 [2024-11-20 10:43:48.849689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.141 [2024-11-20 10:43:48.849695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.141 [2024-11-20 10:43:48.862017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.141 [2024-11-20 10:43:48.862454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.141 [2024-11-20 10:43:48.862473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.141 [2024-11-20 10:43:48.862481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.141 [2024-11-20 10:43:48.862658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.141 [2024-11-20 10:43:48.862835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.141 [2024-11-20 10:43:48.862845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.141 [2024-11-20 10:43:48.862852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.141 [2024-11-20 10:43:48.862859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.401 [2024-11-20 10:43:48.875218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.401 [2024-11-20 10:43:48.875660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.401 [2024-11-20 10:43:48.875678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.401 [2024-11-20 10:43:48.875686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.401 [2024-11-20 10:43:48.875862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.401 [2024-11-20 10:43:48.876049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.401 [2024-11-20 10:43:48.876059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.401 [2024-11-20 10:43:48.876067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.401 [2024-11-20 10:43:48.876073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.401 [2024-11-20 10:43:48.888379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.401 [2024-11-20 10:43:48.888746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.401 [2024-11-20 10:43:48.888764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.401 [2024-11-20 10:43:48.888772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.401 [2024-11-20 10:43:48.888960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.401 [2024-11-20 10:43:48.889138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.401 [2024-11-20 10:43:48.889148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.401 [2024-11-20 10:43:48.889155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.401 [2024-11-20 10:43:48.889162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.901469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.901896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.901914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.901922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.902105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.902283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.902294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.902300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.902308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.914627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.915048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.915067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.915075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.915253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.915431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.915441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.915448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.915455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.927772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.928147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.928166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.928174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.928350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.928527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.928540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.928547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.928554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.940873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.941311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.941329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.941337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.941514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.941693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.941703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.941710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.941716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.954022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.954430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.954447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.954456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.954632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.954811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.954820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.954829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.954836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.967159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.967574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.967593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.967601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.967778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.967962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.967973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.967980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.967991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.980339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.980804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.980822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.980830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.981013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.981191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.981201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.981209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.981216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:48.993513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:48.993944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:48.993968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:48.993976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:48.994153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:48.994330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:48.994340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:48.994347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:48.994354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:49.006538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:49.006970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:49.006988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:49.006997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:49.007174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:49.007352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:49.007362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:49.007369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:49.007376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:49.019713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.402 [2024-11-20 10:43:49.020083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.402 [2024-11-20 10:43:49.020100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.402 [2024-11-20 10:43:49.020110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.402 [2024-11-20 10:43:49.020286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.402 [2024-11-20 10:43:49.020463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.402 [2024-11-20 10:43:49.020473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.402 [2024-11-20 10:43:49.020480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.402 [2024-11-20 10:43:49.020487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.402 [2024-11-20 10:43:49.032807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.402 [2024-11-20 10:43:49.033182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.402 [2024-11-20 10:43:49.033200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.402 [2024-11-20 10:43:49.033209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.402 [2024-11-20 10:43:49.033385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.402 [2024-11-20 10:43:49.033563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.402 [2024-11-20 10:43:49.033573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.402 [2024-11-20 10:43:49.033580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.402 [2024-11-20 10:43:49.033586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.402 [2024-11-20 10:43:49.045905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.402 [2024-11-20 10:43:49.046341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.402 [2024-11-20 10:43:49.046359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.402 [2024-11-20 10:43:49.046367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.046544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.046722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.046732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.046741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.046749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.059062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.059506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.059524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.059533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.059714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.059892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.059903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.059910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.059918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.072241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.072613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.072656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.072680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.073175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.073354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.073364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.073372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.073381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.085373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.085764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.085782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.085791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.085974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.086154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.086164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.086171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.086178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.098374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.098790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.098808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.098816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.098994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.099168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.099180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.099187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.099194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.111232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.111626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.111643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.111650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.111812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.111981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.111991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.112015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.112023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.403 [2024-11-20 10:43:49.124114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.403 [2024-11-20 10:43:49.124539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.403 [2024-11-20 10:43:49.124556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.403 [2024-11-20 10:43:49.124565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.403 [2024-11-20 10:43:49.124755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.403 [2024-11-20 10:43:49.124934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.403 [2024-11-20 10:43:49.124944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.403 [2024-11-20 10:43:49.124957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.403 [2024-11-20 10:43:49.124964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.137107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.137525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.137569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.137593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.138039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.138203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.138212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.138219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.138230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.149998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.150417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.150434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.150442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.150604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.150767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.150776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.150782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.150788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.162801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.163197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.163213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.163221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.163383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.163546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.163556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.163563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.163569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.175703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.176052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.176071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.176078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.176242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.176404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.176414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.176420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.176427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.188556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.188956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.188973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.188981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.189143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.189306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.189316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.189322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.189329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.201467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.201843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.201850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.202034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.202207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.202217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.202223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.202230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.214367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.214717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.214734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.214741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.214903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.215091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.215101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.215107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.215114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.227214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.227606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.227623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.227631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.663 [2024-11-20 10:43:49.227798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.663 [2024-11-20 10:43:49.227967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.663 [2024-11-20 10:43:49.227977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.663 [2024-11-20 10:43:49.227984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.663 [2024-11-20 10:43:49.227991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.663 [2024-11-20 10:43:49.240073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.663 [2024-11-20 10:43:49.240400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.663 [2024-11-20 10:43:49.240418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.663 [2024-11-20 10:43:49.240425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.240588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.240752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.240762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.240768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.240775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.252898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.253311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.253355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.253378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.253969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.254551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.254593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.254618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.254646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.265793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.266192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.266209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.266216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.266379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.266541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.266553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.266560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.266567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.278699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.279097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.279115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.279123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.279285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.279448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.279457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.279464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.279470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 9334.67 IOPS, 36.46 MiB/s [2024-11-20T09:43:49.395Z]
00:26:48.664 [2024-11-20 10:43:49.291579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.291994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.292034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.292060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.292612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.292777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.292786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.292792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.292799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
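The interleaved performance line above reports 9334.67 IOPS and 36.46 MiB/s. These two figures are mutually consistent if the workload used a 4 KiB I/O size (an assumption; the block size is not stated in this log excerpt): IOPS × 4096 B / 2^20 ≈ 36.46 MiB/s. A minimal sketch of that check:

```python
# Sanity-check the bdevperf-style throughput line from the log.
# The 4 KiB I/O size is an assumption, not stated in the log itself.
def iops_to_mib_per_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1 << 20)

print(round(iops_to_mib_per_s(9334.67), 2))  # 36.46, matching the log line
```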
00:26:48.664 [2024-11-20 10:43:49.304476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.304892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.304935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.304975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.305557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.305731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.305742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.305748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.305761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.317381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.317867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.317910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.317935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.318476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.318864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.318882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.318898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.318913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.332372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.332830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.332873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.332897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.333501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.333757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.333770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.333780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.333790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.345382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.345831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.345875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.345899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.346491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.346990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.347001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.347008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.347016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.358281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.358735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.358779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.358803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.359400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.359871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.359880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.359887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.359894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.371154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.371572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.371589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.371597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.371758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.371921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.371930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.371936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.371943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.664 [2024-11-20 10:43:49.383929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:48.664 [2024-11-20 10:43:49.384275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.664 [2024-11-20 10:43:49.384316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:48.664 [2024-11-20 10:43:49.384341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:48.664 [2024-11-20 10:43:49.384884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:48.664 [2024-11-20 10:43:49.385071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:48.664 [2024-11-20 10:43:49.385082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:48.664 [2024-11-20 10:43:49.385089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:48.664 [2024-11-20 10:43:49.385096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:48.923 [2024-11-20 10:43:49.396914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.397279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.923 [2024-11-20 10:43:49.397298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.923 [2024-11-20 10:43:49.397306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.923 [2024-11-20 10:43:49.397495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.923 [2024-11-20 10:43:49.397678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.923 [2024-11-20 10:43:49.397687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.923 [2024-11-20 10:43:49.397694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.923 [2024-11-20 10:43:49.397700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.923 [2024-11-20 10:43:49.409713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.410131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.923 [2024-11-20 10:43:49.410149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.923 [2024-11-20 10:43:49.410156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.923 [2024-11-20 10:43:49.410319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.923 [2024-11-20 10:43:49.410483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.923 [2024-11-20 10:43:49.410492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.923 [2024-11-20 10:43:49.410499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.923 [2024-11-20 10:43:49.410505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.923 [2024-11-20 10:43:49.422539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.422826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.923 [2024-11-20 10:43:49.422843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.923 [2024-11-20 10:43:49.422851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.923 [2024-11-20 10:43:49.423032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.923 [2024-11-20 10:43:49.423210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.923 [2024-11-20 10:43:49.423220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.923 [2024-11-20 10:43:49.423238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.923 [2024-11-20 10:43:49.423245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.923 [2024-11-20 10:43:49.435416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.435754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.923 [2024-11-20 10:43:49.435771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.923 [2024-11-20 10:43:49.435779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.923 [2024-11-20 10:43:49.435941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.923 [2024-11-20 10:43:49.436115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.923 [2024-11-20 10:43:49.436128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.923 [2024-11-20 10:43:49.436135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.923 [2024-11-20 10:43:49.436142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.923 [2024-11-20 10:43:49.448382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.448723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.923 [2024-11-20 10:43:49.448740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.923 [2024-11-20 10:43:49.448747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.923 [2024-11-20 10:43:49.448909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.923 [2024-11-20 10:43:49.449077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.923 [2024-11-20 10:43:49.449087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.923 [2024-11-20 10:43:49.449094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.923 [2024-11-20 10:43:49.449100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.923 [2024-11-20 10:43:49.461317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.923 [2024-11-20 10:43:49.461710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.461727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.461734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.461896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.462087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.462106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.462113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.462121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.474103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.474459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.474476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.474483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.474646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.474808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.474818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.474824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.474834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.486908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.487331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.487348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.487356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.487519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.487682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.487692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.487698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.487705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.499777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.500128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.500144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.500152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.500314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.500476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.500485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.500492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.500498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.512624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.513020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.513039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.513047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.513211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.513374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.513384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.513390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.513398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.525526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.525968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.526012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.526036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.526614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.527205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.527234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.527256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.527275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.538418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.538824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.538868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.538891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.539404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.539578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.539588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.539595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.539602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.924 [2024-11-20 10:43:49.551258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.924 [2024-11-20 10:43:49.551674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.924 [2024-11-20 10:43:49.551691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.924 [2024-11-20 10:43:49.551699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.924 [2024-11-20 10:43:49.551860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.924 [2024-11-20 10:43:49.552047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.924 [2024-11-20 10:43:49.552058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.924 [2024-11-20 10:43:49.552065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.924 [2024-11-20 10:43:49.552071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.564061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.564403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.564420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.564428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.564593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.564756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.564765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.564772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.564778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.576967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.577440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.577457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.577466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.577628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.577792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.577802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.577808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.577814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.590169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.590491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.590509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.590517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.590701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.590895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.590905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.590914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.590921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.602991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.603347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.603392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.603416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.604006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.604555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.604567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.604574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.604581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.615871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.616305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.616350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.616375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.616967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.617510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.617520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.617527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.617534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.628754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.629104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.629121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.629128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.629290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.629453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.629463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.629469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.629476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.925 [2024-11-20 10:43:49.641619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.925 [2024-11-20 10:43:49.642046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.925 [2024-11-20 10:43:49.642094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:48.925 [2024-11-20 10:43:49.642118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:48.925 [2024-11-20 10:43:49.642485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:48.925 [2024-11-20 10:43:49.642649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.925 [2024-11-20 10:43:49.642659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.925 [2024-11-20 10:43:49.642665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.925 [2024-11-20 10:43:49.642675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.185 [2024-11-20 10:43:49.654584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.185 [2024-11-20 10:43:49.655009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.185 [2024-11-20 10:43:49.655027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.185 [2024-11-20 10:43:49.655036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.185 [2024-11-20 10:43:49.655215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.185 [2024-11-20 10:43:49.655379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.185 [2024-11-20 10:43:49.655388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.185 [2024-11-20 10:43:49.655394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.185 [2024-11-20 10:43:49.655400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.185 [2024-11-20 10:43:49.667401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.185 [2024-11-20 10:43:49.667802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.185 [2024-11-20 10:43:49.667820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.185 [2024-11-20 10:43:49.667828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.185 [2024-11-20 10:43:49.667996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.185 [2024-11-20 10:43:49.668185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.185 [2024-11-20 10:43:49.668195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.185 [2024-11-20 10:43:49.668202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.185 [2024-11-20 10:43:49.668209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.185 [2024-11-20 10:43:49.680313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.185 [2024-11-20 10:43:49.680640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.185 [2024-11-20 10:43:49.680659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.185 [2024-11-20 10:43:49.680667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.185 [2024-11-20 10:43:49.680830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.185 [2024-11-20 10:43:49.680999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.681009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.681016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.681023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.693213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.693641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.693658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.693666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.693829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.694014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.694024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.694031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.694038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.706087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.706480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.706497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.706504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.706666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.706828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.706837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.706844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.706851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.719088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.719483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.719500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.719508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.719671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.719834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.719844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.719851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.719857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.732022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.732365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.732382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.732390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.732556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.732719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.732728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.732734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.732741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.745358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.745785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.745831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.745855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.746455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.746619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.746629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.746636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.746642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.758373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.758705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.758723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.758731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.758910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.759078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.759088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.759094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.759102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.771176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.771582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.771626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.771649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.772245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.772791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.772804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.772812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.772818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.784075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.784506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.186 [2024-11-20 10:43:49.784523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.186 [2024-11-20 10:43:49.784530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.186 [2024-11-20 10:43:49.784693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.186 [2024-11-20 10:43:49.784856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.186 [2024-11-20 10:43:49.784865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.186 [2024-11-20 10:43:49.784872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.186 [2024-11-20 10:43:49.784878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.186 [2024-11-20 10:43:49.796895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.186 [2024-11-20 10:43:49.797327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.797371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.797394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.797890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.798077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.798087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.798094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.798102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.809786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.810208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.810224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.810232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.810394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.810557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.810566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.810573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.810583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.822604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.823017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.823036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.823044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.823217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.823389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.823399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.823405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.823412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.835386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.835837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.835881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.835906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.836501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.836996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.837007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.837014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.837021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.848469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.848893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.848911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.848920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.849100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.849273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.849282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.849289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.849296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.861466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.861916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.861973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.862000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.862578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.863125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.863135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.863143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.863150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.874422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.874851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.874895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.874919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.875342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.875517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.875527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.875534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.875541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.887426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.887852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.887869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.887877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.888056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.888228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.187 [2024-11-20 10:43:49.888238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.187 [2024-11-20 10:43:49.888245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.187 [2024-11-20 10:43:49.888252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.187 [2024-11-20 10:43:49.900412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.187 [2024-11-20 10:43:49.900836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.187 [2024-11-20 10:43:49.900852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.187 [2024-11-20 10:43:49.900860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.187 [2024-11-20 10:43:49.901032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.187 [2024-11-20 10:43:49.901196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.188 [2024-11-20 10:43:49.901207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.188 [2024-11-20 10:43:49.901213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.188 [2024-11-20 10:43:49.901219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.188 [2024-11-20 10:43:49.913484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.447 [2024-11-20 10:43:49.913917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.447 [2024-11-20 10:43:49.913934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.447 [2024-11-20 10:43:49.913943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.447 [2024-11-20 10:43:49.914126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.447 [2024-11-20 10:43:49.914304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.447 [2024-11-20 10:43:49.914314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.447 [2024-11-20 10:43:49.914321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.447 [2024-11-20 10:43:49.914328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.447 [2024-11-20 10:43:49.926317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.447 [2024-11-20 10:43:49.926723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.447 [2024-11-20 10:43:49.926766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.447 [2024-11-20 10:43:49.926790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.447 [2024-11-20 10:43:49.927297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.447 [2024-11-20 10:43:49.927471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.447 [2024-11-20 10:43:49.927480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.447 [2024-11-20 10:43:49.927487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.447 [2024-11-20 10:43:49.927493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.447 [2024-11-20 10:43:49.939226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:49.939566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:49.939583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:49.939591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:49.939754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:49.939917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:49.939932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:49.939938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:49.939945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:49.952063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:49.952511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:49.952556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:49.952580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:49.953092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:49.953266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:49.953276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:49.953283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:49.953290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:49.964918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:49.965341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:49.965359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:49.965367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:49.965529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:49.965693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:49.965703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:49.965709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:49.965715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:49.977725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:49.978155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:49.978202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:49.978225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:49.978592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:49.978756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:49.978766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:49.978773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:49.978782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:49.990516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:49.990970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:49.991016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:49.991039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:49.991506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:49.991670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:49.991679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:49.991686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:49.991692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:50.003558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.448 [2024-11-20 10:43:50.003992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.448 [2024-11-20 10:43:50.004027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.448 [2024-11-20 10:43:50.004037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.448 [2024-11-20 10:43:50.004220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.448 [2024-11-20 10:43:50.004403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.448 [2024-11-20 10:43:50.004413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.448 [2024-11-20 10:43:50.004420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.448 [2024-11-20 10:43:50.004427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.448 [2024-11-20 10:43:50.016694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.448 [2024-11-20 10:43:50.017123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.448 [2024-11-20 10:43:50.017142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.448 [2024-11-20 10:43:50.017151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.448 [2024-11-20 10:43:50.017328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.448 [2024-11-20 10:43:50.017507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.448 [2024-11-20 10:43:50.017517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.448 [2024-11-20 10:43:50.017524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.448 [2024-11-20 10:43:50.017530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.448 [2024-11-20 10:43:50.030095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.448 [2024-11-20 10:43:50.030513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.448 [2024-11-20 10:43:50.030531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.448 [2024-11-20 10:43:50.030540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.448 [2024-11-20 10:43:50.030735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.448 [2024-11-20 10:43:50.030932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.448 [2024-11-20 10:43:50.030943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.448 [2024-11-20 10:43:50.030963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.448 [2024-11-20 10:43:50.030971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.448 [2024-11-20 10:43:50.043102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.448 [2024-11-20 10:43:50.043535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.448 [2024-11-20 10:43:50.043553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.448 [2024-11-20 10:43:50.043561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.448 [2024-11-20 10:43:50.043732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.448 [2024-11-20 10:43:50.043904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.448 [2024-11-20 10:43:50.043913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.448 [2024-11-20 10:43:50.043920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.448 [2024-11-20 10:43:50.043927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.448 [2024-11-20 10:43:50.056051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.448 [2024-11-20 10:43:50.056416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.448 [2024-11-20 10:43:50.056434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.448 [2024-11-20 10:43:50.056442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.448 [2024-11-20 10:43:50.056614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.448 [2024-11-20 10:43:50.056786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.448 [2024-11-20 10:43:50.056796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.448 [2024-11-20 10:43:50.056803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.056809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.069194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.069559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.069576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.069584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.069764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.069942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.069960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.069968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.069976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.082297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.082731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.082750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.082759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.082932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.083110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.083120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.083127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.083133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.095424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.095760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.095779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.095788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.095973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.096152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.096164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.096171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.096179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.108509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.108816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.108834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.108843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.109027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.109206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.109219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.109226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.109233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.121598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.121982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.122001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.122010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.122182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.122355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.122365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.122371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.122378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.134773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.135223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.135242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.135250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.135428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.135606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.135616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.135624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.135631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.147767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.148160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.148178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.148187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.148358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.148530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.148539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.148546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.148558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.160893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.161182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.161199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.161208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.161379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.161551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.161561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.161568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.161576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.449 [2024-11-20 10:43:50.173956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.449 [2024-11-20 10:43:50.174249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.449 [2024-11-20 10:43:50.174266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.449 [2024-11-20 10:43:50.174274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.449 [2024-11-20 10:43:50.174451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.449 [2024-11-20 10:43:50.174628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.449 [2024-11-20 10:43:50.174639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.449 [2024-11-20 10:43:50.174646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.449 [2024-11-20 10:43:50.174653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.709 [2024-11-20 10:43:50.186977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.709 [2024-11-20 10:43:50.187267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.709 [2024-11-20 10:43:50.187285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.709 [2024-11-20 10:43:50.187292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.709 [2024-11-20 10:43:50.187464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.709 [2024-11-20 10:43:50.187636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.709 [2024-11-20 10:43:50.187646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.709 [2024-11-20 10:43:50.187652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.709 [2024-11-20 10:43:50.187659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.709 [2024-11-20 10:43:50.199990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.709 [2024-11-20 10:43:50.200276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.709 [2024-11-20 10:43:50.200294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.709 [2024-11-20 10:43:50.200302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.709 [2024-11-20 10:43:50.200474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.709 [2024-11-20 10:43:50.200647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.709 [2024-11-20 10:43:50.200657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.709 [2024-11-20 10:43:50.200664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.709 [2024-11-20 10:43:50.200670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.709 [2024-11-20 10:43:50.212955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.709 [2024-11-20 10:43:50.213290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.709 [2024-11-20 10:43:50.213330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.709 [2024-11-20 10:43:50.213356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.709 [2024-11-20 10:43:50.213936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.709 [2024-11-20 10:43:50.214118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.709 [2024-11-20 10:43:50.214128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.709 [2024-11-20 10:43:50.214135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.709 [2024-11-20 10:43:50.214142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.709 [2024-11-20 10:43:50.225897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.709 [2024-11-20 10:43:50.226234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.709 [2024-11-20 10:43:50.226263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.709 [2024-11-20 10:43:50.226271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.709 [2024-11-20 10:43:50.226433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.709 [2024-11-20 10:43:50.226597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.709 [2024-11-20 10:43:50.226606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.709 [2024-11-20 10:43:50.226613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.709 [2024-11-20 10:43:50.226619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.709 [2024-11-20 10:43:50.238828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.709 [2024-11-20 10:43:50.239175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.709 [2024-11-20 10:43:50.239193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.709 [2024-11-20 10:43:50.239201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.709 [2024-11-20 10:43:50.239377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.710 [2024-11-20 10:43:50.239550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.710 [2024-11-20 10:43:50.239561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.710 [2024-11-20 10:43:50.239567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.710 [2024-11-20 10:43:50.239574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.710 [2024-11-20 10:43:50.251613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.710 [2024-11-20 10:43:50.251960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.710 [2024-11-20 10:43:50.251977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.710 [2024-11-20 10:43:50.251985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.710 [2024-11-20 10:43:50.252149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.710 [2024-11-20 10:43:50.252312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.710 [2024-11-20 10:43:50.252322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.710 [2024-11-20 10:43:50.252328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.710 [2024-11-20 10:43:50.252335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.710 [2024-11-20 10:43:50.264706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.710 [2024-11-20 10:43:50.265054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.710 [2024-11-20 10:43:50.265072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.710 [2024-11-20 10:43:50.265080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.710 [2024-11-20 10:43:50.265253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.710 [2024-11-20 10:43:50.265430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.710 [2024-11-20 10:43:50.265439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.710 [2024-11-20 10:43:50.265446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.710 [2024-11-20 10:43:50.265452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.710 [2024-11-20 10:43:50.277700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.710 [2024-11-20 10:43:50.278109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.710 [2024-11-20 10:43:50.278156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.710 [2024-11-20 10:43:50.278180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.710 [2024-11-20 10:43:50.278759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.710 [2024-11-20 10:43:50.278980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.710 [2024-11-20 10:43:50.278993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.710 [2024-11-20 10:43:50.279000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.710 [2024-11-20 10:43:50.279009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.710 7001.00 IOPS, 27.35 MiB/s [2024-11-20T09:43:50.441Z] [2024-11-20 10:43:50.294667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.710 [2024-11-20 10:43:50.295122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.710 [2024-11-20 10:43:50.295146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.710 [2024-11-20 10:43:50.295157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.710 [2024-11-20 10:43:50.295410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.710 [2024-11-20 10:43:50.295665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.710 [2024-11-20 10:43:50.295678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.710 [2024-11-20 10:43:50.295688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.710 [2024-11-20 10:43:50.295698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.710 [2024-11-20 10:43:50.307752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.710 [2024-11-20 10:43:50.308117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.710 [2024-11-20 10:43:50.308162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.710 [2024-11-20 10:43:50.308187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.710 [2024-11-20 10:43:50.308763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.710 [2024-11-20 10:43:50.309192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.710 [2024-11-20 10:43:50.309202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.710 [2024-11-20 10:43:50.309209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.710 [2024-11-20 10:43:50.309216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.710 [2024-11-20 10:43:50.322515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.710 [2024-11-20 10:43:50.323008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.710 [2024-11-20 10:43:50.323029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.710 [2024-11-20 10:43:50.323039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.710 [2024-11-20 10:43:50.323272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.710 [2024-11-20 10:43:50.323507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.710 [2024-11-20 10:43:50.323519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.710 [2024-11-20 10:43:50.323529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.710 [2024-11-20 10:43:50.323543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.710 [2024-11-20 10:43:50.335498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.710 [2024-11-20 10:43:50.335903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.710 [2024-11-20 10:43:50.335921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.710 [2024-11-20 10:43:50.335929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.710 [2024-11-20 10:43:50.336107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.710 [2024-11-20 10:43:50.336281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.710 [2024-11-20 10:43:50.336290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.710 [2024-11-20 10:43:50.336297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.710 [2024-11-20 10:43:50.336304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.710 [2024-11-20 10:43:50.348595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.710 [2024-11-20 10:43:50.349048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.710 [2024-11-20 10:43:50.349066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.710 [2024-11-20 10:43:50.349074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.710 [2024-11-20 10:43:50.349245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.710 [2024-11-20 10:43:50.349417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.710 [2024-11-20 10:43:50.349427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.710 [2024-11-20 10:43:50.349434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.710 [2024-11-20 10:43:50.349441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.710 [2024-11-20 10:43:50.361686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.362107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.362125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.362134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.362310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.362487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.362498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.362505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.362511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.711 [2024-11-20 10:43:50.374668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.375099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.375143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.375166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.375742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.376149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.376160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.376167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.376174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.711 [2024-11-20 10:43:50.387574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.387951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.387969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.387977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.388139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.388302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.388328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.388335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.388342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.711 [2024-11-20 10:43:50.400604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.401028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.401046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.401055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.401227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.401401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.401411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.401418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.401424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.711 [2024-11-20 10:43:50.413661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.414049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.414066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.414074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.414251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.414424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.414434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.414441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.414448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.711 [2024-11-20 10:43:50.426727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.711 [2024-11-20 10:43:50.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.711 [2024-11-20 10:43:50.427216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.711 [2024-11-20 10:43:50.427240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.711 [2024-11-20 10:43:50.427816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.711 [2024-11-20 10:43:50.428408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.711 [2024-11-20 10:43:50.428435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.711 [2024-11-20 10:43:50.428441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.711 [2024-11-20 10:43:50.428448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.971 [2024-11-20 10:43:50.439812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.971 [2024-11-20 10:43:50.440171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.971 [2024-11-20 10:43:50.440189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.971 [2024-11-20 10:43:50.440198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.971 [2024-11-20 10:43:50.440369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.971 [2024-11-20 10:43:50.440542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.971 [2024-11-20 10:43:50.440552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.971 [2024-11-20 10:43:50.440559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.971 [2024-11-20 10:43:50.440566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.971 [2024-11-20 10:43:50.452803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.971 [2024-11-20 10:43:50.453147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.971 [2024-11-20 10:43:50.453165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.971 [2024-11-20 10:43:50.453173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.971 [2024-11-20 10:43:50.453344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.453515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.453529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.453539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.453546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.465818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.466154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.466172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.466180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.466352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.466524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.466533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.466540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.466547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.478845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.479224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.479242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.479251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.479422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.479595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.479605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.479611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.479618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.491902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.492197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.492216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.492224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.492396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.492569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.492579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.492586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.492595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.504914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.505202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.505219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.505228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.505398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.505570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.505580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.505587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.505595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.517864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.518279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.518323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.518347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.518833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.519012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.519023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.519029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.519036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.530807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.531121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.531166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.531190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.531750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.531923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.531933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.531939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.531954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.543885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.544296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.544313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.544321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.544493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.544666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.544676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.544683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.544690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.556720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.557135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.557153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.557160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.557323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.557486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.557495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.557502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.557508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.569752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.570184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.570201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.972 [2024-11-20 10:43:50.570210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.972 [2024-11-20 10:43:50.570381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.972 [2024-11-20 10:43:50.570553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.972 [2024-11-20 10:43:50.570562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.972 [2024-11-20 10:43:50.570569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.972 [2024-11-20 10:43:50.570576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.972 [2024-11-20 10:43:50.582665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.972 [2024-11-20 10:43:50.583095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.972 [2024-11-20 10:43:50.583113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.973 [2024-11-20 10:43:50.583122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.973 [2024-11-20 10:43:50.583300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.973 [2024-11-20 10:43:50.583473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.973 [2024-11-20 10:43:50.583483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.973 [2024-11-20 10:43:50.583489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.973 [2024-11-20 10:43:50.583495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.973 [2024-11-20 10:43:50.595676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.973 [2024-11-20 10:43:50.596097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.973 [2024-11-20 10:43:50.596152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.973 [2024-11-20 10:43:50.596176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.973 [2024-11-20 10:43:50.596723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.973 [2024-11-20 10:43:50.596886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.973 [2024-11-20 10:43:50.596896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.973 [2024-11-20 10:43:50.596903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.973 [2024-11-20 10:43:50.596909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.973 [2024-11-20 10:43:50.608484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.973 [2024-11-20 10:43:50.608944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.973 [2024-11-20 10:43:50.609004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:49.973 [2024-11-20 10:43:50.609028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:49.973 [2024-11-20 10:43:50.609571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:49.973 [2024-11-20 10:43:50.609744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.973 [2024-11-20 10:43:50.609754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.973 [2024-11-20 10:43:50.609761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.973 [2024-11-20 10:43:50.609768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.973 [2024-11-20 10:43:50.621570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.621974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.621992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.622001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.622173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.622346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.622358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.622366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.622372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.634462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.634822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.634838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.634846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.635014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.635178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.635187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.635193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.635200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.647247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.647665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.647709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.647733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.648226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.648391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.648399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.648406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.648411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.660167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.660589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.660631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.660658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.661208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.661373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.661381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.661387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.661397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.672972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.673389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.673406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.673414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.673577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.673740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.673750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.673757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.673763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.685790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.686219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.686265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.686290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.686731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.686895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.686904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.686911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.686918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.973 [2024-11-20 10:43:50.698861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 [2024-11-20 10:43:50.699217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-11-20 10:43:50.699235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:49.973 [2024-11-20 10:43:50.699244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:49.973 [2024-11-20 10:43:50.699420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:49.973 [2024-11-20 10:43:50.699598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-11-20 10:43:50.699608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-11-20 10:43:50.699615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-11-20 10:43:50.699622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.711917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.712351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.712368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.712376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.712539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.712702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.233 [2024-11-20 10:43:50.712711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.233 [2024-11-20 10:43:50.712718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.233 [2024-11-20 10:43:50.712724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.724849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.725278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.725323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.725347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.725822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.725993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.233 [2024-11-20 10:43:50.726002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.233 [2024-11-20 10:43:50.726009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.233 [2024-11-20 10:43:50.726016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.737744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.738163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.738181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.738207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.738786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.739293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.233 [2024-11-20 10:43:50.739303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.233 [2024-11-20 10:43:50.739310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.233 [2024-11-20 10:43:50.739317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.750554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.750886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.750903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.750911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.751081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.751245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.233 [2024-11-20 10:43:50.751255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.233 [2024-11-20 10:43:50.751261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.233 [2024-11-20 10:43:50.751267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.763421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.763765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.763782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.763790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.763956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.764121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.233 [2024-11-20 10:43:50.764131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.233 [2024-11-20 10:43:50.764137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.233 [2024-11-20 10:43:50.764144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.233 [2024-11-20 10:43:50.776351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.233 [2024-11-20 10:43:50.776776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.233 [2024-11-20 10:43:50.776792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.233 [2024-11-20 10:43:50.776800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.233 [2024-11-20 10:43:50.776967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.233 [2024-11-20 10:43:50.777131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.777140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.777147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.777154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.789215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.789558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.789575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.789582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.789744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.789907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.789921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.789927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.789934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.802079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.802491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.802528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.802554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.803139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.803304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.803313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.803320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.803327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.814978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.815425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.815474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.815497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.816092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.816279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.816288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.816294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.816301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.827886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.828282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.828301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.828309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.828472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.828635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.828645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.828652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.828663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.840731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.841161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.841179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.841186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.841347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.841511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.841520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.841526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.841532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.853519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.853934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.853959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.853967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.854130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.854292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.854302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.854308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.854315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.866410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.866894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.866911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.866919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.867088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.867253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.867262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.867272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.867279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.879456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.879868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.879885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.879893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.880078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.880252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.234 [2024-11-20 10:43:50.880262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.234 [2024-11-20 10:43:50.880268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.234 [2024-11-20 10:43:50.880275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.234 [2024-11-20 10:43:50.892363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.234 [2024-11-20 10:43:50.892756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.234 [2024-11-20 10:43:50.892773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.234 [2024-11-20 10:43:50.892780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.234 [2024-11-20 10:43:50.892943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.234 [2024-11-20 10:43:50.893118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.235 [2024-11-20 10:43:50.893129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.235 [2024-11-20 10:43:50.893135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.235 [2024-11-20 10:43:50.893143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.235 [2024-11-20 10:43:50.905279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.235 [2024-11-20 10:43:50.905696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.235 [2024-11-20 10:43:50.905741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.235 [2024-11-20 10:43:50.905765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.235 [2024-11-20 10:43:50.906350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.235 [2024-11-20 10:43:50.906516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.235 [2024-11-20 10:43:50.906525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.235 [2024-11-20 10:43:50.906532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.235 [2024-11-20 10:43:50.906538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.235 [2024-11-20 10:43:50.918086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.235 [2024-11-20 10:43:50.918448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.235 [2024-11-20 10:43:50.918466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:50.235 [2024-11-20 10:43:50.918474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:50.235 [2024-11-20 10:43:50.918649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:50.235 [2024-11-20 10:43:50.918824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.235 [2024-11-20 10:43:50.918833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.235 [2024-11-20 10:43:50.918840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.235 [2024-11-20 10:43:50.918846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.235 [2024-11-20 10:43:50.930918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.235 [2024-11-20 10:43:50.931330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.235 [2024-11-20 10:43:50.931376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.235 [2024-11-20 10:43:50.931401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.235 [2024-11-20 10:43:50.931814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.235 [2024-11-20 10:43:50.931983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.235 [2024-11-20 10:43:50.931992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.235 [2024-11-20 10:43:50.931999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.235 [2024-11-20 10:43:50.932006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.235 [2024-11-20 10:43:50.943741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.235 [2024-11-20 10:43:50.944111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.235 [2024-11-20 10:43:50.944129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.235 [2024-11-20 10:43:50.944137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.235 [2024-11-20 10:43:50.944299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.235 [2024-11-20 10:43:50.944462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.235 [2024-11-20 10:43:50.944472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.235 [2024-11-20 10:43:50.944478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.235 [2024-11-20 10:43:50.944485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.235 [2024-11-20 10:43:50.956629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.235 [2024-11-20 10:43:50.956936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.235 [2024-11-20 10:43:50.956958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.235 [2024-11-20 10:43:50.956966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.235 [2024-11-20 10:43:50.957156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.235 [2024-11-20 10:43:50.957333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.235 [2024-11-20 10:43:50.957346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.235 [2024-11-20 10:43:50.957353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.235 [2024-11-20 10:43:50.957361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.493 [2024-11-20 10:43:50.969796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.493 [2024-11-20 10:43:50.970207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.493 [2024-11-20 10:43:50.970225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.493 [2024-11-20 10:43:50.970233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.493 [2024-11-20 10:43:50.970406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.493 [2024-11-20 10:43:50.970580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.493 [2024-11-20 10:43:50.970589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.493 [2024-11-20 10:43:50.970596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.493 [2024-11-20 10:43:50.970602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.493 [2024-11-20 10:43:50.982758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.493 [2024-11-20 10:43:50.983150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.493 [2024-11-20 10:43:50.983168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.493 [2024-11-20 10:43:50.983176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.493 [2024-11-20 10:43:50.983347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.493 [2024-11-20 10:43:50.983519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.493 [2024-11-20 10:43:50.983528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.493 [2024-11-20 10:43:50.983535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.493 [2024-11-20 10:43:50.983542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.493 [2024-11-20 10:43:50.995610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.493 [2024-11-20 10:43:50.996066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.493 [2024-11-20 10:43:50.996084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.493 [2024-11-20 10:43:50.996092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.493 [2024-11-20 10:43:50.996270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.493 [2024-11-20 10:43:50.996433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.493 [2024-11-20 10:43:50.996443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.493 [2024-11-20 10:43:50.996450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.493 [2024-11-20 10:43:50.996460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.493 [2024-11-20 10:43:51.008508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.493 [2024-11-20 10:43:51.008866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.493 [2024-11-20 10:43:51.008884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.493 [2024-11-20 10:43:51.008892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.493 [2024-11-20 10:43:51.009058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.493 [2024-11-20 10:43:51.009222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.493 [2024-11-20 10:43:51.009231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.493 [2024-11-20 10:43:51.009238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.009244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.021386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.021758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.021775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.021783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.021945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.022137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.022155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.022163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.022170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.034231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.034616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.034662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.034686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.035282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.035816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.035825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.035831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.035837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.047119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.047461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.047477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.047485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.047646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.047810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.047819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.047826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.047833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.060030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.060429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.060446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.060454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.060615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.060779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.060788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.060795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.060801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.072844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.073261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.073303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.073329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.073848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.074018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.074027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.074034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.074041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.085758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.086178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.086204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.086370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.086532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.086541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.086548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.086554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.098597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.098920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.098937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.098945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.099113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.099277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.099287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.099293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.099299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.111511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.111926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.111943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.111956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.112141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.112315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.112324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.112331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.112338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.124341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.124741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.124759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.124767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.124931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.125100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.494 [2024-11-20 10:43:51.125115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.494 [2024-11-20 10:43:51.125121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.494 [2024-11-20 10:43:51.125128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.494 [2024-11-20 10:43:51.137322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.494 [2024-11-20 10:43:51.137689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.494 [2024-11-20 10:43:51.137733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.494 [2024-11-20 10:43:51.137757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.494 [2024-11-20 10:43:51.138217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.494 [2024-11-20 10:43:51.138390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.138401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.138409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.138416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.150265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.150617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.150661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.150684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.151195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.151359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.151369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.151375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.151381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.163197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.163611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.163627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.163635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.163797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.163967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.163977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.163984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.163994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.176027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.176423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.176439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.176447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.176608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.176770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.176780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.176786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.176792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.188853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.189292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.189337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.189361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.189938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.190415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.190424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.190430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.190436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.201987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.202415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.202458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.202482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.203073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.203521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.203530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.203537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.203543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.495 [2024-11-20 10:43:51.214864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.495 [2024-11-20 10:43:51.215299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.495 [2024-11-20 10:43:51.215342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.495 [2024-11-20 10:43:51.215367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.495 [2024-11-20 10:43:51.215944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.495 [2024-11-20 10:43:51.216500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.495 [2024-11-20 10:43:51.216509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.495 [2024-11-20 10:43:51.216515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.495 [2024-11-20 10:43:51.216522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.755 [2024-11-20 10:43:51.228011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.755 [2024-11-20 10:43:51.228374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.755 [2024-11-20 10:43:51.228391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.755 [2024-11-20 10:43:51.228400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.755 [2024-11-20 10:43:51.228583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.755 [2024-11-20 10:43:51.228760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.755 [2024-11-20 10:43:51.228769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.755 [2024-11-20 10:43:51.228775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.755 [2024-11-20 10:43:51.228782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.755 [2024-11-20 10:43:51.240824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.755 [2024-11-20 10:43:51.241165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.755 [2024-11-20 10:43:51.241182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.755 [2024-11-20 10:43:51.241190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.755 [2024-11-20 10:43:51.241352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.755 [2024-11-20 10:43:51.241514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.755 [2024-11-20 10:43:51.241523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.755 [2024-11-20 10:43:51.241530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.755 [2024-11-20 10:43:51.241536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.755 [2024-11-20 10:43:51.253675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.755 [2024-11-20 10:43:51.254067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.755 [2024-11-20 10:43:51.254085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.755 [2024-11-20 10:43:51.254092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.254258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.254421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.254430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.254437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.254444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.266472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.266886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.266903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.266910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.267078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.267241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.267251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.267257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.267264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.279429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.279853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.279870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.279878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.280047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.280211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.280219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.280226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.280232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 5600.80 IOPS, 21.88 MiB/s [2024-11-20T09:43:51.487Z] [2024-11-20 10:43:51.292556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.292902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.292918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.292926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.293093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.293257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.293269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.293276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.293282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.305415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.305815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.305832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.305839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.306008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.306172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.306181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.306188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.306195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.318226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.318644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.318660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.318668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.318830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.319000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.319010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.319016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.319023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.331045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.331383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.331426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.331450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.332043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.332574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.332583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.332590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.332599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.343877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.344224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.344242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.344249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.344411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.344574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.344583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.344590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.344598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.356679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.357093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.357110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.357118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.357281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.357444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.357453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.357460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.357467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.369509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.756 [2024-11-20 10:43:51.369834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.756 [2024-11-20 10:43:51.369855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.756 [2024-11-20 10:43:51.369862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.756 [2024-11-20 10:43:51.370029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.756 [2024-11-20 10:43:51.370192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.756 [2024-11-20 10:43:51.370201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.756 [2024-11-20 10:43:51.370208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.756 [2024-11-20 10:43:51.370214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.756 [2024-11-20 10:43:51.382409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.382832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.382886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.382910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.383469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.383635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.383644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.383652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.383659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.395556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.395907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.395965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.395990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.396561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.396739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.396749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.396755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.396762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.408560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.408875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.408892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.408899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.409068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.409232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.409241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.409248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.409254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.421441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.421839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.421856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.421864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.422036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.422200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.422210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.422216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.422223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.434348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.434715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.434760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.434783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.435375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.435921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.435930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.435936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.435944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.447156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.447573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.447590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.447598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.447760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.447923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.447932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.447939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.447945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.460029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.460430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.460476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.460499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.461016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.461199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.461211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.461218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.461226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.757 [2024-11-20 10:43:51.472994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.757 [2024-11-20 10:43:51.473403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.757 [2024-11-20 10:43:51.473419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:50.757 [2024-11-20 10:43:51.473427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:50.757 [2024-11-20 10:43:51.473588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:50.757 [2024-11-20 10:43:51.473751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.757 [2024-11-20 10:43:51.473760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.757 [2024-11-20 10:43:51.473766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.757 [2024-11-20 10:43:51.473772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.486016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.018 [2024-11-20 10:43:51.486419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.018 [2024-11-20 10:43:51.486436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:51.018 [2024-11-20 10:43:51.486444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:51.018 [2024-11-20 10:43:51.486606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:51.018 [2024-11-20 10:43:51.486770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.018 [2024-11-20 10:43:51.486779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.018 [2024-11-20 10:43:51.486785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.018 [2024-11-20 10:43:51.486792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.498834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.018 [2024-11-20 10:43:51.499232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.018 [2024-11-20 10:43:51.499249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:51.018 [2024-11-20 10:43:51.499257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:51.018 [2024-11-20 10:43:51.499419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:51.018 [2024-11-20 10:43:51.499582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.018 [2024-11-20 10:43:51.499591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.018 [2024-11-20 10:43:51.499598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.018 [2024-11-20 10:43:51.499615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.511660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.018 [2024-11-20 10:43:51.512049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.018 [2024-11-20 10:43:51.512066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:51.018 [2024-11-20 10:43:51.512074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:51.018 [2024-11-20 10:43:51.512236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:51.018 [2024-11-20 10:43:51.512400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.018 [2024-11-20 10:43:51.512409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.018 [2024-11-20 10:43:51.512415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.018 [2024-11-20 10:43:51.512422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.524566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.018 [2024-11-20 10:43:51.524997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.018 [2024-11-20 10:43:51.525044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:51.018 [2024-11-20 10:43:51.525068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:51.018 [2024-11-20 10:43:51.525646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:51.018 [2024-11-20 10:43:51.526203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.018 [2024-11-20 10:43:51.526214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.018 [2024-11-20 10:43:51.526221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.018 [2024-11-20 10:43:51.526228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.537367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.018 [2024-11-20 10:43:51.537696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.018 [2024-11-20 10:43:51.537713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420
00:26:51.018 [2024-11-20 10:43:51.537720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set
00:26:51.018 [2024-11-20 10:43:51.537882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor
00:26:51.018 [2024-11-20 10:43:51.538051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.018 [2024-11-20 10:43:51.538061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.018 [2024-11-20 10:43:51.538067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.018 [2024-11-20 10:43:51.538074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.018 [2024-11-20 10:43:51.550309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.018 [2024-11-20 10:43:51.550673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.018 [2024-11-20 10:43:51.550717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.018 [2024-11-20 10:43:51.550741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.018 [2024-11-20 10:43:51.551259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.018 [2024-11-20 10:43:51.551424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.018 [2024-11-20 10:43:51.551433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.018 [2024-11-20 10:43:51.551440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.018 [2024-11-20 10:43:51.551447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.018 [2024-11-20 10:43:51.563281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.018 [2024-11-20 10:43:51.563624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.018 [2024-11-20 10:43:51.563641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.018 [2024-11-20 10:43:51.563649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.018 [2024-11-20 10:43:51.563822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.018 [2024-11-20 10:43:51.564000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.018 [2024-11-20 10:43:51.564010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.564017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.564023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.576082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.576368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.576387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.576394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.576557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.576720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.576730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.576736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.576743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.588967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.589316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.589332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.589340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.589506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.589670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.589680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.589686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.589693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.601880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.602231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.602249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.602256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.602419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.602582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.602592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.602598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.602604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.614745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.615069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.615086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.615094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.615266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.615438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.615448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.615454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.615461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.627633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.627970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.627988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.627996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.628158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.628322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.628334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.628341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.628348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.640552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.640823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.640840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.640848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.641016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.641179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.641189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.641195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.641202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.653630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.653930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.653954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.653962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.654155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.019 [2024-11-20 10:43:51.654334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.019 [2024-11-20 10:43:51.654344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.019 [2024-11-20 10:43:51.654351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.019 [2024-11-20 10:43:51.654358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.019 [2024-11-20 10:43:51.666673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.019 [2024-11-20 10:43:51.666975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.019 [2024-11-20 10:43:51.666995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.019 [2024-11-20 10:43:51.667003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.019 [2024-11-20 10:43:51.667180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.667359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.667369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.667376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.667387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.679710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.680121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.680139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.680148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.680325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.680503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.680513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.680521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.680528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.692848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.693209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.693228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.693237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.693413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.693590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.693600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.693607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.693614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.705929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.706294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.706312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.706320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.706497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.706675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.706685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.706693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.706700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.719006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.719374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.719391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.719399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.719576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.719753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.719763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.719770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.719777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.732091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.732523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.732542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.732550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.732727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.732906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.732915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.732922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.732929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.020 [2024-11-20 10:43:51.745243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.020 [2024-11-20 10:43:51.745678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.020 [2024-11-20 10:43:51.745696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.020 [2024-11-20 10:43:51.745704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.020 [2024-11-20 10:43:51.745882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.020 [2024-11-20 10:43:51.746065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.020 [2024-11-20 10:43:51.746075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.020 [2024-11-20 10:43:51.746082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.020 [2024-11-20 10:43:51.746089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.279 [2024-11-20 10:43:51.758403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.758815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.758833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.758841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.759028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.759206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.759217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.759223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.759230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 [2024-11-20 10:43:51.771538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.771966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.771985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.771995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.772174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.772353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.772362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.772370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.772378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 [2024-11-20 10:43:51.784705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.785134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.785152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.785160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.785337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.785515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.785526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.785536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.785544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 [2024-11-20 10:43:51.797849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.798147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.798165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.798173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.798351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.798528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.798542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.798550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.798557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3639225 Killed "${NVMF_APP[@]}" "$@" 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3640555 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3640555 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3640555 ']' 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.280 10:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.280 [2024-11-20 10:43:51.811006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.811435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.811452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.811460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.811638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.811816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.811826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.811832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.811840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 [2024-11-20 10:43:51.824066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.824434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.824452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.824460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.824636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.824819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.280 [2024-11-20 10:43:51.824829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.280 [2024-11-20 10:43:51.824836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.280 [2024-11-20 10:43:51.824843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.280 [2024-11-20 10:43:51.837178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.280 [2024-11-20 10:43:51.837563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.280 [2024-11-20 10:43:51.837581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.280 [2024-11-20 10:43:51.837589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.280 [2024-11-20 10:43:51.837766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.280 [2024-11-20 10:43:51.837945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.837960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.837967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.837974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.850295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.850671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.850688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.850696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.850868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.851047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.851058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.851065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.851071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.281 [2024-11-20 10:43:51.858091] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:26:51.281 [2024-11-20 10:43:51.858132] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.281 [2024-11-20 10:43:51.863488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.863820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.863839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.863846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.864047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.864227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.864237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.864244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.864252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.876547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.876876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.876895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.876904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.877087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.877265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.877275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.877282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.877289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.889526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.889815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.889834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.889842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.890021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.890194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.890203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.890210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.890217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.902631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.902978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.902998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.903006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.903183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.903362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.903373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.903386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.903393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.915729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.916123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.916142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.916151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.916328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.916506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.916516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.916523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.916531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.923680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:51.281 [2024-11-20 10:43:51.928801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.929148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.929167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.929175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.929347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.281 [2024-11-20 10:43:51.929520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.281 [2024-11-20 10:43:51.929533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.281 [2024-11-20 10:43:51.929543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.281 [2024-11-20 10:43:51.929551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.281 [2024-11-20 10:43:51.941832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.281 [2024-11-20 10:43:51.942201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.281 [2024-11-20 10:43:51.942220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.281 [2024-11-20 10:43:51.942229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.281 [2024-11-20 10:43:51.942400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:51.942574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:51.942584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:51.942590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:51.942602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.282 [2024-11-20 10:43:51.954923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.282 [2024-11-20 10:43:51.955281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.282 [2024-11-20 10:43:51.955299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.282 [2024-11-20 10:43:51.955308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.282 [2024-11-20 10:43:51.955482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:51.955654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:51.955664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:51.955673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:51.955680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.282 [2024-11-20 10:43:51.966920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.282 [2024-11-20 10:43:51.966945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.282 [2024-11-20 10:43:51.966956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.282 [2024-11-20 10:43:51.966962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:51.282 [2024-11-20 10:43:51.966968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.282 [2024-11-20 10:43:51.967903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.282 [2024-11-20 10:43:51.968327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.282 [2024-11-20 10:43:51.968346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.282 [2024-11-20 10:43:51.968355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.282 [2024-11-20 10:43:51.968336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.282 [2024-11-20 10:43:51.968527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:51.968700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:51.968710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:51.968716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:51.968724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.282 [2024-11-20 10:43:51.971962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.282 [2024-11-20 10:43:51.971965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.282 [2024-11-20 10:43:51.981020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.282 [2024-11-20 10:43:51.981378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.282 [2024-11-20 10:43:51.981398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.282 [2024-11-20 10:43:51.981406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.282 [2024-11-20 10:43:51.981588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:51.981767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:51.981777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:51.981786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:51.981794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.282 [2024-11-20 10:43:51.994116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.282 [2024-11-20 10:43:51.994568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.282 [2024-11-20 10:43:51.994588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.282 [2024-11-20 10:43:51.994596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.282 [2024-11-20 10:43:51.994774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:51.994957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:51.994968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:51.994976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:51.994983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.282 [2024-11-20 10:43:52.007283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.282 [2024-11-20 10:43:52.007730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.282 [2024-11-20 10:43:52.007751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.282 [2024-11-20 10:43:52.007760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.282 [2024-11-20 10:43:52.007938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.282 [2024-11-20 10:43:52.008125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.282 [2024-11-20 10:43:52.008135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.282 [2024-11-20 10:43:52.008143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.282 [2024-11-20 10:43:52.008151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.020476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.020924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.020945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.020961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.021139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.021318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.021334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.021342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.021349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.033651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.034002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.034022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.034031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.034209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.034387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.034397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.034405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.034412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.046719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.047176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.047196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.047206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.047385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.047564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.047574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.047582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.047590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.059894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.060322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.060341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.060349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.060527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.060706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.060716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.060723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.060730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.550 [2024-11-20 10:43:52.073051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.073394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.073412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.073420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.073597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.073774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.073786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.073796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.073805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.086121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.086516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.086534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.086542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.086718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.086896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.086907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.086914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.086920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 [2024-11-20 10:43:52.099231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.099574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.099593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.099601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.099777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.099961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.550 [2024-11-20 10:43:52.099971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.550 [2024-11-20 10:43:52.099983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.550 [2024-11-20 10:43:52.099991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.550 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.550 [2024-11-20 10:43:52.112298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.550 [2024-11-20 10:43:52.112664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.550 [2024-11-20 10:43:52.112681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.550 [2024-11-20 10:43:52.112689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.550 [2024-11-20 10:43:52.112760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.550 [2024-11-20 10:43:52.112865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.550 [2024-11-20 10:43:52.113047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.551 [2024-11-20 10:43:52.113057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.551 [2024-11-20 10:43:52.113064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.551 [2024-11-20 10:43:52.113070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.551 [2024-11-20 10:43:52.125379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.551 [2024-11-20 10:43:52.125795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.551 [2024-11-20 10:43:52.125813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.551 [2024-11-20 10:43:52.125822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.551 [2024-11-20 10:43:52.126004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.551 [2024-11-20 10:43:52.126182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.551 [2024-11-20 10:43:52.126192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.551 [2024-11-20 10:43:52.126199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.551 [2024-11-20 10:43:52.126207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.551 [2024-11-20 10:43:52.138515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.551 [2024-11-20 10:43:52.138956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.551 [2024-11-20 10:43:52.138974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.551 [2024-11-20 10:43:52.138986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.551 [2024-11-20 10:43:52.139163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.551 [2024-11-20 10:43:52.139341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.551 [2024-11-20 10:43:52.139351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.551 [2024-11-20 10:43:52.139359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.551 [2024-11-20 10:43:52.139366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.551 Malloc0 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.551 [2024-11-20 10:43:52.151661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.551 [2024-11-20 10:43:52.152093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.551 [2024-11-20 10:43:52.152111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.551 [2024-11-20 10:43:52.152119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.551 [2024-11-20 10:43:52.152296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.551 [2024-11-20 10:43:52.152473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.551 [2024-11-20 10:43:52.152484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.551 [2024-11-20 10:43:52.152490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.551 [2024-11-20 10:43:52.152497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.551 [2024-11-20 10:43:52.164801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.551 [2024-11-20 10:43:52.165153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.551 [2024-11-20 10:43:52.165171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8500 with addr=10.0.0.2, port=4420 00:26:51.551 [2024-11-20 10:43:52.165179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8500 is same with the state(6) to be set 00:26:51.551 [2024-11-20 10:43:52.165356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8500 (9): Bad file descriptor 00:26:51.551 [2024-11-20 10:43:52.165533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.551 [2024-11-20 10:43:52.165543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.551 [2024-11-20 10:43:52.165551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.551 [2024-11-20 10:43:52.165563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.551 [2024-11-20 10:43:52.171821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.551 10:43:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3639491 00:26:51.551 [2024-11-20 10:43:52.177876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.831 [2024-11-20 10:43:52.289181] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:52.808 4667.67 IOPS, 18.23 MiB/s [2024-11-20T09:43:54.475Z] 5578.00 IOPS, 21.79 MiB/s [2024-11-20T09:43:55.411Z] 6282.38 IOPS, 24.54 MiB/s [2024-11-20T09:43:56.348Z] 6810.67 IOPS, 26.60 MiB/s [2024-11-20T09:43:57.726Z] 7238.00 IOPS, 28.27 MiB/s [2024-11-20T09:43:58.662Z] 7603.09 IOPS, 29.70 MiB/s [2024-11-20T09:43:59.599Z] 7896.00 IOPS, 30.84 MiB/s [2024-11-20T09:44:00.534Z] 8146.15 IOPS, 31.82 MiB/s [2024-11-20T09:44:01.475Z] 8358.36 IOPS, 32.65 MiB/s [2024-11-20T09:44:01.475Z] 8542.80 IOPS, 33.37 MiB/s 00:27:00.744 Latency(us) 00:27:00.744 [2024-11-20T09:44:01.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:00.744 Verification LBA range: start 0x0 length 0x4000 00:27:00.744 Nvme1n1 : 15.01 8545.68 33.38 11157.57 0.00 6476.44 658.92 14474.91 00:27:00.744 [2024-11-20T09:44:01.475Z] =================================================================================================================== 00:27:00.744 [2024-11-20T09:44:01.475Z] Total : 8545.68 33.38 11157.57 0.00 6476.44 658.92 14474.91 00:27:01.003 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:01.003 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.003 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:01.004 10:44:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.004 rmmod nvme_tcp 00:27:01.004 rmmod nvme_fabrics 00:27:01.004 rmmod nvme_keyring 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3640555 ']' 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3640555 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3640555 ']' 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3640555 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3640555 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3640555' 00:27:01.004 killing 
process with pid 3640555 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3640555 00:27:01.004 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3640555 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.263 10:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.168 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.168 00:27:03.168 real 0m26.199s 00:27:03.168 user 1m1.259s 00:27:03.168 sys 0m6.847s 00:27:03.168 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.168 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.168 ************************************ 00:27:03.168 END TEST 
nvmf_bdevperf 00:27:03.168 ************************************ 00:27:03.428 10:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:03.428 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.428 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.428 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.428 ************************************ 00:27:03.428 START TEST nvmf_target_disconnect 00:27:03.428 ************************************ 00:27:03.428 10:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:03.428 * Looking for test storage... 00:27:03.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.428 10:44:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:03.428 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:03.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.429 --rc genhtml_branch_coverage=1 00:27:03.429 --rc genhtml_function_coverage=1 00:27:03.429 --rc genhtml_legend=1 00:27:03.429 --rc geninfo_all_blocks=1 00:27:03.429 --rc geninfo_unexecuted_blocks=1 
00:27:03.429 00:27:03.429 ' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:03.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.429 --rc genhtml_branch_coverage=1 00:27:03.429 --rc genhtml_function_coverage=1 00:27:03.429 --rc genhtml_legend=1 00:27:03.429 --rc geninfo_all_blocks=1 00:27:03.429 --rc geninfo_unexecuted_blocks=1 00:27:03.429 00:27:03.429 ' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:03.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.429 --rc genhtml_branch_coverage=1 00:27:03.429 --rc genhtml_function_coverage=1 00:27:03.429 --rc genhtml_legend=1 00:27:03.429 --rc geninfo_all_blocks=1 00:27:03.429 --rc geninfo_unexecuted_blocks=1 00:27:03.429 00:27:03.429 ' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:03.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.429 --rc genhtml_branch_coverage=1 00:27:03.429 --rc genhtml_function_coverage=1 00:27:03.429 --rc genhtml_legend=1 00:27:03.429 --rc geninfo_all_blocks=1 00:27:03.429 --rc geninfo_unexecuted_blocks=1 00:27:03.429 00:27:03.429 ' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.429 10:44:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.429 10:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.001 
10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:10.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:10.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:10.001 Found net devices under 0000:86:00.0: cvl_0_0 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.001 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:10.001 Found net devices under 0000:86:00.1: cvl_0_1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.002 10:44:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:27:10.002 00:27:10.002 --- 10.0.0.2 ping statistics --- 00:27:10.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.002 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:10.002 00:27:10.002 --- 10.0.0.1 ping statistics --- 00:27:10.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.002 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.002 10:44:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.002 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 ************************************ 00:27:10.002 START TEST nvmf_target_disconnect_tc1 00:27:10.002 ************************************ 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.002 [2024-11-20 10:44:10.159881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.002 [2024-11-20 10:44:10.159934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7ab0 with 
addr=10.0.0.2, port=4420 00:27:10.002 [2024-11-20 10:44:10.159959] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:10.002 [2024-11-20 10:44:10.159969] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.002 [2024-11-20 10:44:10.159975] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:10.002 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:10.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:10.002 Initializing NVMe Controllers 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.002 00:27:10.002 real 0m0.106s 00:27:10.002 user 0m0.051s 00:27:10.002 sys 0m0.055s 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.002 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 ************************************ 00:27:10.002 END TEST nvmf_target_disconnect_tc1 00:27:10.002 ************************************ 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.003 10:44:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 ************************************ 00:27:10.003 START TEST nvmf_target_disconnect_tc2 00:27:10.003 ************************************ 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3645580 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3645580 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3645580 ']' 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 [2024-11-20 10:44:10.300829] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:27:10.003 [2024-11-20 10:44:10.300873] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.003 [2024-11-20 10:44:10.379048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.003 [2024-11-20 10:44:10.423521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.003 [2024-11-20 10:44:10.423554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.003 [2024-11-20 10:44:10.423562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.003 [2024-11-20 10:44:10.423570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.003 [2024-11-20 10:44:10.423575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.003 [2024-11-20 10:44:10.425093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:10.003 [2024-11-20 10:44:10.425202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:10.003 [2024-11-20 10:44:10.425311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.003 [2024-11-20 10:44:10.425311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 Malloc0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 [2024-11-20 10:44:10.596678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 [2024-11-20 10:44:10.628944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3645807 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:10.003 10:44:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.554 10:44:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3645580 00:27:12.554 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 
Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Write completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 [2024-11-20 10:44:12.657430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O failed 00:27:12.554 Read completed with error (sct=0, sc=8) 00:27:12.554 starting I/O 
failed
00:27:12.554 Read completed with error (sct=0, sc=8)
00:27:12.554 starting I/O failed
00:27:12.554 Write completed with error (sct=0, sc=8)
00:27:12.554 starting I/O failed
[... further Read/Write completion-error records (sct=0, sc=8) ...]
00:27:12.554 [2024-11-20 10:44:12.657631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... further Read/Write completion-error records (sct=0, sc=8) ...]
00:27:12.555 [2024-11-20 10:44:12.657830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further Read/Write completion-error records (sct=0, sc=8) ...]
00:27:12.555 [2024-11-20 10:44:12.658049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.555 [2024-11-20 10:44:12.658240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.555 [2024-11-20 10:44:12.658263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.555 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/qpair-failed triplet repeats for tqpair=0x7f6420000b90 ...]
00:27:12.555 [2024-11-20 10:44:12.658995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.555 [2024-11-20 10:44:12.659029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.555 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/qpair-failed triplet repeats for tqpair=0x7f6420000b90 ...]
00:27:12.556 [2024-11-20 10:44:12.662190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.556 [2024-11-20 10:44:12.662250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:12.556 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f642c000b90 ...]
00:27:12.557 [2024-11-20 10:44:12.668478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.557 [2024-11-20 10:44:12.668508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.557 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x23a6ba0 ...]
00:27:12.558 [2024-11-20 10:44:12.674446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.558 [2024-11-20 10:44:12.674461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.558 qpair failed and we were unable to recover it.
00:27:12.558 [2024-11-20 10:44:12.674536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.674550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.674632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.674647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.674731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.674746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.674964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.674981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.675076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.558 [2024-11-20 10:44:12.675225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.675326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.675419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.675535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.675692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.558 [2024-11-20 10:44:12.675905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.675936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.676138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.676366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.676578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.676674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.558 [2024-11-20 10:44:12.676755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.676870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.676885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.677033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.677049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.677193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.677225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.677333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.677370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.558 [2024-11-20 10:44:12.677562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.677594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.677817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.677849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.677969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.678010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.678222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.678255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.678509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.678541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.558 [2024-11-20 10:44:12.678715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.678747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.678943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.678965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.679175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.679190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.679411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.679444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 00:27:12.558 [2024-11-20 10:44:12.679621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.558 [2024-11-20 10:44:12.679653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.558 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.679894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.679941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.680094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.680110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.680329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.680361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.680608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.680640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.680860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.680893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.681020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.681053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.681234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.681267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.681440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.681473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.681679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.681712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.681902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.681935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.682075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.682110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.682284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.682317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.682493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.682525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.682764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.682779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.682942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.682973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.683107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.683276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.683372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.683524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.683685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.683770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.683942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.683966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.684476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.684943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.684962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 
00:27:12.559 [2024-11-20 10:44:12.685204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.685237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.685354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.685386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.685628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.559 [2024-11-20 10:44:12.685660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.559 qpair failed and we were unable to recover it. 00:27:12.559 [2024-11-20 10:44:12.685858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.685875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.685943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.685969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 
00:27:12.560 [2024-11-20 10:44:12.686180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.686331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.686416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.686576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.686686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 
00:27:12.560 [2024-11-20 10:44:12.686780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.686967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.686984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.687084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.687098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.687176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.687191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 00:27:12.560 [2024-11-20 10:44:12.687284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.560 [2024-11-20 10:44:12.687299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.560 qpair failed and we were unable to recover it. 
00:27:12.560 [2024-11-20 10:44:12.687390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:12.560 [2024-11-20 10:44:12.687406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 
00:27:12.560 qpair failed and we were unable to recover it. 
00:27:12.563 (the three-line error group above repeats with only the timestamps advancing, from 10:44:12.687390 through 10:44:12.709213; every reconnect attempt for tqpair=0x23a6ba0 to 10.0.0.2 port 4420 fails the same way with errno = 111) 
00:27:12.563 [2024-11-20 10:44:12.709344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.709375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.709634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.709666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.709906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.709939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.710103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.710137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.710310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.710341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 
00:27:12.563 [2024-11-20 10:44:12.710512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.710544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.710739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.710754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.710901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.710933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.711124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.711157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.711286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.711319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 
00:27:12.563 [2024-11-20 10:44:12.711555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.711587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.711706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.711722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.711959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.711975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.712066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.712081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.712290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.712323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 
00:27:12.563 [2024-11-20 10:44:12.712524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.712556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.712689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.712721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.712921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.712937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.713099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.713116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.563 [2024-11-20 10:44:12.713280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.713299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 
00:27:12.563 [2024-11-20 10:44:12.713524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.563 [2024-11-20 10:44:12.713539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.563 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.713726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.713741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.713970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.713987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.714151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.714167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.714413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.714445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.714627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.714658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.714840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.714872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.715044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.715076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.715248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.715279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.715453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.715486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.715605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.715636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.715842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.715875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.716152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.716168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.716305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.716322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.716569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.716585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.716723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.716755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.716884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.716917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.717045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.717077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.717205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.717237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.717529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.717562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.717700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.717733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.718040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.718079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.718368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.718401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.718581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.718612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.718746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.718761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.718978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.719013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.719134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.719173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.719347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.719378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.719578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.719612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.719798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.719831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.720020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.720037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.720147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.720180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.720357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.720389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.720572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.720605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.720781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.720797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 
00:27:12.564 [2024-11-20 10:44:12.720976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.721010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.721202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.721235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.721419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.721450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.564 [2024-11-20 10:44:12.721631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.564 [2024-11-20 10:44:12.721664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.564 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.721902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.721934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 
00:27:12.565 [2024-11-20 10:44:12.722180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.722197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.722396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.722412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.722502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.722516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.722716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.722732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.722877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.722892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 
00:27:12.565 [2024-11-20 10:44:12.723040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.723199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.723296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.723527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.723646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 
00:27:12.565 [2024-11-20 10:44:12.723823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.723856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.723975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.724219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.724385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.724538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 
00:27:12.565 [2024-11-20 10:44:12.724694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.724902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.724935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.725134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.725151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.725238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.725284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 00:27:12.565 [2024-11-20 10:44:12.725547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.565 [2024-11-20 10:44:12.725592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.565 qpair failed and we were unable to recover it. 
00:27:12.568 [2024-11-20 10:44:12.745623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.745639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.745792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.745807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.745895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.745909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.746089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.746126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.746364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.746397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 
00:27:12.568 [2024-11-20 10:44:12.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.746543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.746796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.746828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.746956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.746989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.747089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.747121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.747364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.747379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 
00:27:12.568 [2024-11-20 10:44:12.747467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.747482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.747662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.747694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.747865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.747898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.748076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.748109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.748322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.748354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 
00:27:12.568 [2024-11-20 10:44:12.748478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.748511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.748748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.748779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.568 [2024-11-20 10:44:12.748904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.568 [2024-11-20 10:44:12.748936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.568 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.749069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.749111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.749324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.749339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.749478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.749493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.749663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.749696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.749867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.749899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.750121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.750157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.750279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.750313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.750429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.750460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.750640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.750672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.750856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.750888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.751104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.751137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.751267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.751299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.751407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.751437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.751558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.751591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.751764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.751795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.752331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.752869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.752886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.752987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.753142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.753263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.753377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.753591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.753741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.753888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.753906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.754116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.754220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.754381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.754604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.754689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.754840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.754856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.755131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.755164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.569 [2024-11-20 10:44:12.755427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.755460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 
00:27:12.569 [2024-11-20 10:44:12.755587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.569 [2024-11-20 10:44:12.755620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.569 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.755810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.755842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.756093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.756288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.756392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 
00:27:12.570 [2024-11-20 10:44:12.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.756775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.756891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.756907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.757114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.757211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 
00:27:12.570 [2024-11-20 10:44:12.757302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.757472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.757619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.757836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.757869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.758031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 
00:27:12.570 [2024-11-20 10:44:12.758291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.758384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.758541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.758705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 00:27:12.570 [2024-11-20 10:44:12.758933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.758998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it. 
00:27:12.570 [2024-11-20 10:44:12.759175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.570 [2024-11-20 10:44:12.759206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.570 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() errno = 111 → nvme_tcp.c:2288 sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 10:44:12.759336 through 10:44:12.778887, elapsed 00:27:12.570–00:27:12.573 ...]
00:27:12.573 [2024-11-20 10:44:12.779015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.779048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.779223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.779254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.779422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.779453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.779581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.779613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.779802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 
00:27:12.573 [2024-11-20 10:44:12.779968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.780195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.780342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.780497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.780716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 
00:27:12.573 [2024-11-20 10:44:12.780867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.780899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.781085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.781118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.781313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.781345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.573 qpair failed and we were unable to recover it. 00:27:12.573 [2024-11-20 10:44:12.781580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.573 [2024-11-20 10:44:12.781611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.781729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.781761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.782003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.782169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.782413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.782566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.782726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.782944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.782988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.783119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.783294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.783483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.783635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.783741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.783839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.783854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.784009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.784042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.784227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.784259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.784516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.784547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.784732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.784763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.785019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.785124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.785286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.785556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.785709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.785865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.785881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.786107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.786143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.786412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.786445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.786619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.786651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.786886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.786918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.787121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.787153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.787336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.787352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.787517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.787550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.787755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.787787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.787915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.787957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.788199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.788232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.788413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.788445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.788759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.788866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.788897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 
00:27:12.574 [2024-11-20 10:44:12.789096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.574 [2024-11-20 10:44:12.789128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.574 qpair failed and we were unable to recover it. 00:27:12.574 [2024-11-20 10:44:12.789309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.789325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.789530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.789563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.789834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.789865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.789998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.790221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.790372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.790524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.790672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.790833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.790959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.790974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.791057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.791071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.791215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.791246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.791356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.791388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.791573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.791605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.791770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.791801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.792003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.792145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.792244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.792485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.792643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.792810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.792841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.793048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.793198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.793361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.793571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.793771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.793924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.793969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.794089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.794217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.794368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.794466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.794690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.794842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.794874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.795007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.795042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 
00:27:12.575 [2024-11-20 10:44:12.795229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.795269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.795353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.795366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.795447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.795461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.575 [2024-11-20 10:44:12.795695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.575 [2024-11-20 10:44:12.795727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.575 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.795926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.795968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.796109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.796273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.796382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.796481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.796632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.796831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.796863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.797107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.797140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.797306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.797321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.797484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.797499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.797741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.797773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.797953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.797978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.798063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.798077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.798309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.798341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.798508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.798539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.798660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.798693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.798798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.798829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.799041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.799075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.799248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.799280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.799455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.799487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.799674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.799966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.800113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.800291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.800440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.800682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.800882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.800915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.801122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.801156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.801272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.801305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.801473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.801505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.801678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.801711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.801975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.802012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.802144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.802177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.802389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.802423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.802612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.802645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.802827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.802860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 
00:27:12.576 [2024-11-20 10:44:12.803065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.803100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.803211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.576 [2024-11-20 10:44:12.803226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.576 qpair failed and we were unable to recover it. 00:27:12.576 [2024-11-20 10:44:12.803368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.803384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.803451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.803466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.803670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.803686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.803832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.803848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.803990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.804023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.804201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.804234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.804352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.804384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.804576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.804607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.804788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.804820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.804994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.805027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.805213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.805245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.805427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.805460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.805646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.805679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.805934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.805984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.806259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.806292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.806529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.806561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.806685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.806716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.806831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.806862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.807030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.807064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.807237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.807253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.807333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.807365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.807604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.807636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.807815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.807846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.808081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.808114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.808297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.808329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.808454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.808486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.808683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.808715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.808894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.808927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.809073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.809238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.809253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.809435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.809467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.809674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.809704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.809880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.809911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 
00:27:12.577 [2024-11-20 10:44:12.810132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.810150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.577 [2024-11-20 10:44:12.810393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.577 [2024-11-20 10:44:12.810424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.577 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.810534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.810566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.810703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.810734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.810932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.810999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 
00:27:12.578 [2024-11-20 10:44:12.811178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.811193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.811349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.811380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.811584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.811779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.811816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.811941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.811962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 
00:27:12.578 [2024-11-20 10:44:12.812104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.812145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.812249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.812280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.812404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.812435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.812561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.812594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.812792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.812824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 
00:27:12.578 [2024-11-20 10:44:12.812990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.813007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.813104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.813118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.813290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.813322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.813560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.813591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 00:27:12.578 [2024-11-20 10:44:12.813709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.578 [2024-11-20 10:44:12.813740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.578 qpair failed and we were unable to recover it. 
00:27:12.578 [2024-11-20 10:44:12.813933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.813981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.814113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.814146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.814277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.814310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.814417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.814450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.814662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.814694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.814882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.814914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.815104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.815138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.815334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.815349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.815447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.815479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.815760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.815792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.816049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.816065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.816300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.816332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.816522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.816554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.816666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.816698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.816888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.816919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.817111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.817130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.817295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.817326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.817513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.817545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.817788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.817820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.817930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.578 [2024-11-20 10:44:12.817952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.578 qpair failed and we were unable to recover it.
00:27:12.578 [2024-11-20 10:44:12.818052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.818216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.818403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.818569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.818720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.818925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.818968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.819166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.819197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.819384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.819417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.819547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.819579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.819771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.819803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.819983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.820016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.820204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.820236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.820413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.820445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.820621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.820653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.820774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.820806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.821052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.821086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.821269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.821302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.821418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.821451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.821666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.821698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.821911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.821943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.822226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.822259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.822443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.822476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.822675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.822714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.822822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.822854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.823979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.823993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.824084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.824098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.824334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.824366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.824536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.824568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.824804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.824837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.824970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.825003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.825126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.825158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.579 [2024-11-20 10:44:12.825335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.579 [2024-11-20 10:44:12.825351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.579 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.825573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.825606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.825783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.825815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.826073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.826109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.826294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.826310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.826463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.826479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.826714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.826746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.827001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.827034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.827315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.827347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.827529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.827561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.827746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.827778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.827971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.828006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.828177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.828208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.828408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.828424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.828650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.828666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.828878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.829100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.829116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.829321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.829353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.829478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.829509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.829639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.829671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.829848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.829882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.830068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.830103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.830342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.830374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.830609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.830642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.830877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.830909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.831201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.831235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.831418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.831507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.831715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.831751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.831964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.832164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.832458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.832622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.832782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.832928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.832971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.833123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.833138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.833208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.833223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.833369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.833384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.833530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.580 [2024-11-20 10:44:12.833561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.580 qpair failed and we were unable to recover it.
00:27:12.580 [2024-11-20 10:44:12.833741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.580 [2024-11-20 10:44:12.833772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.580 qpair failed and we were unable to recover it. 00:27:12.580 [2024-11-20 10:44:12.833888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.833919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.834187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.834222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.834433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.834466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.834585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.834617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.834736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.834779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.835046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.835080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.835255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.835288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.835476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.835491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.835648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.835665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.835882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.835913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.836117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.836151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.836331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.836363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.836599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.836631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.836808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.836840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.837023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.837063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.837241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.837274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.837442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.837474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.837651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.837682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.837855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.837887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.837996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.838013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.838189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.838205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.838342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.838375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.838566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.838597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.838870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.838902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.839005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.839020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.839221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.839236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.839379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.839410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.839548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.839579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.839770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.839802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.839985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.840189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.840434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.840603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.840688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 
00:27:12.581 [2024-11-20 10:44:12.840793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.581 [2024-11-20 10:44:12.840893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.581 [2024-11-20 10:44:12.840907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.581 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.841038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.841054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.841217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.841248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.841486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.841519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.841702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.841734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.841915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.841957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.842207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.842252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.842436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.842452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.842534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.842548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.842743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.842759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.842906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.842922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.843432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.843804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.843978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.844011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.844181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.844213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.844396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.844428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.844637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.844670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.844852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.844884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.844995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.845139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.845300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.845443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.845612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.845813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.845846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.845982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.846021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 
00:27:12.582 [2024-11-20 10:44:12.846223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.846254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.846438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.846470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.846580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.846612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.582 [2024-11-20 10:44:12.846813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.582 [2024-11-20 10:44:12.846845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.582 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.847056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.847095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 
00:27:12.583 [2024-11-20 10:44:12.847353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.847385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.847513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.847545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.847724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.847756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.847867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.847898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.848136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.848152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 
00:27:12.583 [2024-11-20 10:44:12.848243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.848258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.848441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.848472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.848674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.848706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.848893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.848925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.849068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.849101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 
00:27:12.583 [2024-11-20 10:44:12.849275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.849315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.849470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.849485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.849640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.849672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.849937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.850028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 00:27:12.583 [2024-11-20 10:44:12.850177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.583 [2024-11-20 10:44:12.850215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.583 qpair failed and we were unable to recover it. 
00:27:12.583 [2024-11-20 10:44:12.850409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.850441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.850562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.850594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.850714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.850745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.851961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.851994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.852105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.852121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.852290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.852331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.852502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.852535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.852648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.852812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.852843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.853021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.853054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.853244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.853276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.853459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.853491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.853609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.853640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.853829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.853860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.583 qpair failed and we were unable to recover it.
00:27:12.583 [2024-11-20 10:44:12.854132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.583 [2024-11-20 10:44:12.854166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.854352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.854383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.854591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.854623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.854802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.854834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.855015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.855048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.855169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.855184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.855353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.855368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.855526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.855558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.855830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.855862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.856052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.856085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.856292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.856323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.856560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.856592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.856846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.856877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.857876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.857984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.858135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.858287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.858439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.858661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.858827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.858859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.859045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.859078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.859256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.859272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.859444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.859476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.859658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.859691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.859811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.860022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.860055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.860299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.860332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.860541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.860573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.860686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.860723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.860917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.860958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.861078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.861095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.584 qpair failed and we were unable to recover it.
00:27:12.584 [2024-11-20 10:44:12.861260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.584 [2024-11-20 10:44:12.861275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.861453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.861485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.861692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.861725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.861972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.862007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.862116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.862134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.862344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.862376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.862638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.862668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.862880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.862912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.863046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.863080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.863267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.863299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.863481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.863496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.863749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.863781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.863963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.863996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.864208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.864239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.864506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.864538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.864710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.864742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.864931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.864972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.865914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.865945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.866169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.866189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.866339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.866371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.866551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.866583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.866761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.866793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.866994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.867214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.867320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.867485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.867621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.867824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.867856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.868027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.868043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.868216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.868250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.868432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.868465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.585 [2024-11-20 10:44:12.868651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.585 [2024-11-20 10:44:12.868684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.585 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.868802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.868833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.869008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.869041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.869209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.869241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.869416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.869448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.869574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.869605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.869786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.869818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.870005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.870041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.870169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.870201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.870447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.870479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.870686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.870718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.870826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.870857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.871968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.871984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.872075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.872230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.872326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.872496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.586 [2024-11-20 10:44:12.872684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.586 qpair failed and we were unable to recover it.
00:27:12.586 [2024-11-20 10:44:12.872868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.872900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.873082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.873212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.873368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.873474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 
00:27:12.586 [2024-11-20 10:44:12.873637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.873917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.873963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.874104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.874276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.874426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.874443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 
00:27:12.586 [2024-11-20 10:44:12.874584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.874617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.874801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.874833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.875013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.875047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.875151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.875184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 00:27:12.586 [2024-11-20 10:44:12.875386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.586 [2024-11-20 10:44:12.875403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.586 qpair failed and we were unable to recover it. 
00:27:12.586 [2024-11-20 10:44:12.875552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.875568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.875635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.875650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.875859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.875893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.876032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.876181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.876405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.876507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.876653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.876841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.876873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.877044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.877201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.877360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.877582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.877729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.877899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.877915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.878062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.878167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.878325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.878434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.878637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.878827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.878843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.878990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.879132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.879256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.879357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.879455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.879762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.879795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.879981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.880020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.880096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.880110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.880269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.880305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 
00:27:12.587 [2024-11-20 10:44:12.880570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.880602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.880798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.880831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.880997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.587 [2024-11-20 10:44:12.881030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.587 qpair failed and we were unable to recover it. 00:27:12.587 [2024-11-20 10:44:12.881237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.881270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.881451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.881483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.881726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.881766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.881933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.881981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.882098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.882114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.882344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.882359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.882452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.882466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.882687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.882718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.882899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.882930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.883183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.883199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.883338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.883378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.883492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.883529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.883642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.883676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.883854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.883885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.884053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.884291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.884467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.884569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.884752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.884961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.884995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.885183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.885215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.885443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.885458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.885593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.885608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.885830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.885863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.886052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.886087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.886329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.886360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.886591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.886607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588 [2024-11-20 10:44:12.886709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.886724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.886882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.886914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.887048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.887083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.887256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.887287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 00:27:12.588 [2024-11-20 10:44:12.887393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.588 [2024-11-20 10:44:12.887424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.588 qpair failed and we were unable to recover it. 
00:27:12.588-00:27:12.591 [the identical three-line record — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for roughly 110 further connection attempts, from 2024-11-20 10:44:12.887526 through 10:44:12.907515]
00:27:12.591 [2024-11-20 10:44:12.907701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.591 [2024-11-20 10:44:12.907733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.591 qpair failed and we were unable to recover it. 00:27:12.591 [2024-11-20 10:44:12.907855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.591 [2024-11-20 10:44:12.907887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.591 qpair failed and we were unable to recover it. 00:27:12.591 [2024-11-20 10:44:12.908022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.591 [2024-11-20 10:44:12.908057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.591 qpair failed and we were unable to recover it. 00:27:12.591 [2024-11-20 10:44:12.908179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.908210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.908383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.908399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.908491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.908531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.908742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.908773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.908891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.908924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.909504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.909962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.909980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.910116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.910295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.910435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.910592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.910802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.910964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.910997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.911204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.911220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.911358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.911373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.911574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.911590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.911745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.911761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.911961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.911977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.912076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.912110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.912221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.912252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.912381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.912412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.912522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.912553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.912787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.912820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.913680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.913863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.913990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.914025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.914145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.914179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.914427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.914443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 
00:27:12.592 [2024-11-20 10:44:12.914522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.592 [2024-11-20 10:44:12.914536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.592 qpair failed and we were unable to recover it. 00:27:12.592 [2024-11-20 10:44:12.914615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.914630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.914766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.914783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.914896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.914928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.915119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.915152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.915327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.915358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.915469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.915485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.915678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.915711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.915832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.915865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.916039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.916201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.916354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.916686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.916922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.916937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.917172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.917187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.917329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.917363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.917477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.917509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.917632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.917666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.917807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.917842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.918015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.918321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.918547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.918641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.918738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.918912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.918927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.919135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.919151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.919248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.919263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.919478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.919510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.919628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.919661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.919780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.919812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.593 [2024-11-20 10:44:12.920611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.920883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.920915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.921111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.921144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 00:27:12.593 [2024-11-20 10:44:12.921329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.593 [2024-11-20 10:44:12.921360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.593 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.921619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.921634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.921779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.921813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.922080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.922117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.922248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.922280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.922401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.922433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.922671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.922703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.922884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.922923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.923086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.923119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.923382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.923413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.923589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.923621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.923800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.923833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.923934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.923977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.924091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.924241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.924455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.924669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.924772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.924865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.924881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.925089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.925105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.925250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.925266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.925477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.925509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.925761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.925792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.925906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.925938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.926569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.926907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.926993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.927142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.927249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.927472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.927638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.927805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.927839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 
00:27:12.594 [2024-11-20 10:44:12.927994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.928028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.928297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.928331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.594 [2024-11-20 10:44:12.928442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.594 [2024-11-20 10:44:12.928475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.594 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.928679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.928693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.928841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.928857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.928989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.929151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.929306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.929394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.929556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.929879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.929894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.930053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.930089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.930212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.930245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.930415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.930447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.930636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.930668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.930799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.930831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.931004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.931041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.931171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.931206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.931389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.931405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.931550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.931583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.931819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.931851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.932032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.932065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.932310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.932341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.932566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.932581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.932713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.932729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.932879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.932896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.933044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.933131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.933320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.933485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.933639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.933884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.933915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.934168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.934384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.934492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.934666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.934819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.934918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.934933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.595 [2024-11-20 10:44:12.935080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.935151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 
00:27:12.595 [2024-11-20 10:44:12.935442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.595 [2024-11-20 10:44:12.935479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.595 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.935672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.935704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.935916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.935967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.936148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.936167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.936401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.936433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 10:44:12.936552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.936585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.936767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.936800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.936975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.937190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.937336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 10:44:12.937572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.937703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.937929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.937980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.938221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.938335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.938367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 10:44:12.938474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.938506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.938711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.938745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.938930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.938978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.939162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.939195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 10:44:12.939386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 10:44:12.939401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.599 [2024-11-20 10:44:12.958084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.958117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.958292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.958325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.958440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.958472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.958645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.958683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.958834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.958849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 
00:27:12.599 [2024-11-20 10:44:12.959001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.959019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.959251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.959284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.959468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.959501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.959616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.959648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.959821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.959854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 
00:27:12.599 [2024-11-20 10:44:12.960036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.960069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.960202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.960234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.960402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.960419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.960611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.960626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.960764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.960780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 
00:27:12.599 [2024-11-20 10:44:12.961011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.961044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.961163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.961196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.961368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.961401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.961513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.961531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 00:27:12.599 [2024-11-20 10:44:12.961620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.599 [2024-11-20 10:44:12.961635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.599 qpair failed and we were unable to recover it. 
00:27:12.599 [2024-11-20 10:44:12.961722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.961736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.961875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.961891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.961968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.961983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.962336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.962919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.962966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.963230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.963262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.963443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.963703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.963718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.963889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.963905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.964000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.964101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.964270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.964430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.964519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.964696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.964909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.964941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.965063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.965269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.965409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.965652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.965756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.965929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.965983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.966114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.966257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.966445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.600 [2024-11-20 10:44:12.966550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.966634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.966882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.966916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.967049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.967085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 00:27:12.600 [2024-11-20 10:44:12.967222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.600 [2024-11-20 10:44:12.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.600 qpair failed and we were unable to recover it. 
00:27:12.601 [2024-11-20 10:44:12.967364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.967396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.967518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.967549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.967655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.967688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.967787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.967819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.968012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 
00:27:12.601 [2024-11-20 10:44:12.968161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.968303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.968492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.968742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.968895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.968927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 
00:27:12.601 [2024-11-20 10:44:12.969117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.969328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.969532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.969693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.969847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 
00:27:12.601 [2024-11-20 10:44:12.969940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.969960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 
00:27:12.601 [2024-11-20 10:44:12.970619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.970962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.970984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.971074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.971090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 00:27:12.601 [2024-11-20 10:44:12.971289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.601 [2024-11-20 10:44:12.971305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.601 qpair failed and we were unable to recover it. 
00:27:12.605 [2024-11-20 10:44:12.990899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.990915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 
00:27:12.605 [2024-11-20 10:44:12.991777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.991910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.991998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.992013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.992159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.992231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.992551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.992587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 
00:27:12.605 [2024-11-20 10:44:12.992782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.992814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.993025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.993189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.993407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.993627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 
00:27:12.605 [2024-11-20 10:44:12.993769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.993915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.993931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.994136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.994153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.994252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.994266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 00:27:12.605 [2024-11-20 10:44:12.994533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.605 [2024-11-20 10:44:12.994565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.605 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:12.994885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.994916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.995200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.995235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.995421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.995437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.995595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.995628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.995832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.995865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:12.996059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.996095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.996333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.996365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.996485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.996517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.996700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.996732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.996910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.996942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:12.997155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.997188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.997363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.997395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.997703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.997736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.997940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.997982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.998116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.998149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:12.998326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.998359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.998489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.998521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.998714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.998729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.998804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.998819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.998968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:12.999204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.999346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.999561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.999705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:12.999862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:12.999895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 
00:27:12.606 [2024-11-20 10:44:13.000032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:13.000074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.606 [2024-11-20 10:44:13.000254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.606 [2024-11-20 10:44:13.000293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.606 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.000390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.000405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.000480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.000495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.000584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.000614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.000801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.000834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.001031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.001065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.001301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.001334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.001502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.001518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.001683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.001699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.001842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.001878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.002120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.002154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.002343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.002375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.002582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.002598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.002808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.002839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.003088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.003122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.003257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.003289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.003412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.003445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.003633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.003666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.003795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.003828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.004048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.004627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.004894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.004912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.005074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.005230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 
00:27:12.607 [2024-11-20 10:44:13.005324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.005429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.005594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.607 qpair failed and we were unable to recover it. 00:27:12.607 [2024-11-20 10:44:13.005766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.607 [2024-11-20 10:44:13.005799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.005916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.005957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.006131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.006163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.006337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.006370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.006565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.006597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.006766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.006797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.006993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.007104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.007218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.007396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.007553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.007701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.007733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.007998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.008212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.008355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.008594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.008693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.008935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.008956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.009050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.009065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.009236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.009269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.009535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.009567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.009671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.009690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.009854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.009869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 
00:27:12.608 [2024-11-20 10:44:13.010644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.010885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.608 [2024-11-20 10:44:13.010901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.608 qpair failed and we were unable to recover it. 00:27:12.608 [2024-11-20 10:44:13.011070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.011087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.011252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.011284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.011465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.011496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.011687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.011719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.011847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.011862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.012021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.012111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.012202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.012494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.012715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.012844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.012859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.013016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.013132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.013207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.013379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.013532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.013674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.013874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.013907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.014107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.014324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.014460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.014635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.014731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.014883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.015102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.015119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.015211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.015226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.015373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.015389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 
00:27:12.609 [2024-11-20 10:44:13.015467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.015482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.015621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.609 [2024-11-20 10:44:13.015637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.609 qpair failed and we were unable to recover it. 00:27:12.609 [2024-11-20 10:44:13.015771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.015787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.015862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.015878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.015961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.015979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.016074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.016106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.016240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.016272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.016512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.016544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.016726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.016759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.016868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.016901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.017102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.017136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.017380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.017411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.017694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.017725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.017966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.017983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.018186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.018202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.018338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.018354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.018443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.018586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.018603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.018764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.018808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.018989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.019161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.019326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.019593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.019747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.019853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.019869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.020008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.020025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.020159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.020176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.020386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.020417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.020603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.020635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.020762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.020795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 
00:27:12.610 [2024-11-20 10:44:13.021033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.021068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.021189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.610 [2024-11-20 10:44:13.021221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.610 qpair failed and we were unable to recover it. 00:27:12.610 [2024-11-20 10:44:13.021413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.021451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.021659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.021692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.021801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.021816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 
00:27:12.611 [2024-11-20 10:44:13.021883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.021898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.022035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.022051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.022208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.022242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.022349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.022382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 00:27:12.611 [2024-11-20 10:44:13.022497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.611 [2024-11-20 10:44:13.022529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.611 qpair failed and we were unable to recover it. 
00:27:12.612 [2024-11-20 10:44:13.030593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.612 [2024-11-20 10:44:13.030607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 10:44:13.030819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.612 [2024-11-20 10:44:13.030891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 10:44:13.031129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.612 [2024-11-20 10:44:13.031168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 10:44:13.031288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.612 [2024-11-20 10:44:13.031321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 10:44:13.031419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.612 [2024-11-20 10:44:13.031436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.614 [2024-11-20 10:44:13.042626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.042641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.042711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.042725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.042869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.042901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.043025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.043060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.043258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.043296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 10:44:13.043529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.043545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.043685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.043719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.614 qpair failed and we were unable to recover it. 00:27:12.614 [2024-11-20 10:44:13.043848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.614 [2024-11-20 10:44:13.043880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.044067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.044104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.044297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.044331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.044442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.044458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.044663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.044695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.044942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.045009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.045250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.045282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.045476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.045493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.045651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.045667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.045850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.045881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.046097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.046132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.046310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.046344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.046461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.046494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.046663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.046695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.046876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.046908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.047090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.047124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.047310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.047343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.047459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.047491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.047670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.047703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.047891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.047923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.048600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.048955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.048971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.049108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.049124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.615 [2024-11-20 10:44:13.049274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.049290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.049395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.049410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.615 [2024-11-20 10:44:13.049557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.615 [2024-11-20 10:44:13.049572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.615 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.049647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.049662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.049889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.049921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.050121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4af0 is same with the state(6) to be set 00:27:12.616 [2024-11-20 10:44:13.050417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.050489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.050640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.050677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.050976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.051115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.051238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.051423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.051585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.051746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.051886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.051918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.052192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.052226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.052397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.052430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.052627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.052670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.052738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.052753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.052847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.052863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.053145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.053903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.053995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.054148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.054251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.054400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 
00:27:12.616 [2024-11-20 10:44:13.054558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.616 qpair failed and we were unable to recover it. 00:27:12.616 [2024-11-20 10:44:13.054660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.616 [2024-11-20 10:44:13.054676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.054748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.054763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.054842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.054857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.055018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 
00:27:12.617 [2024-11-20 10:44:13.055175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.055311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.055522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.055736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.055941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.055986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 
00:27:12.617 [2024-11-20 10:44:13.056162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.056194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.056314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.056345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.056470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.056503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.056695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.056847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.056879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 
00:27:12.617 [2024-11-20 10:44:13.056994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.057027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.057208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.057240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.057349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.057380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.057634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.057667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 00:27:12.617 [2024-11-20 10:44:13.057837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.617 [2024-11-20 10:44:13.057869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.617 qpair failed and we were unable to recover it. 
00:27:12.617 [2024-11-20 10:44:13.058212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.058284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.058411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.058448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.058641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.058673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.058802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.058836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.058971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.059005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.059247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.059280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.059443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.059461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.059597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.059614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.059761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.617 [2024-11-20 10:44:13.059776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.617 qpair failed and we were unable to recover it.
00:27:12.617 [2024-11-20 10:44:13.059878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.059911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.060152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.060186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.060319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.060351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.060538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.060571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.060828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.060859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.061873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.061888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.062803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.062989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.063142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.063299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.063519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.063663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.063939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.063981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.064105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.064138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.064309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.064341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.064530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.064564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.064801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.064835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.064957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.064976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.065066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.065082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.065283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.065299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.618 [2024-11-20 10:44:13.065448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.618 [2024-11-20 10:44:13.065464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.618 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.065535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.065550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.065694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.065726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.065930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.065978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.066158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.066193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.066320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.066352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.066529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.066562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.066845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.066878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.067080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.067097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.067288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.067411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.067443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.067634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.067667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.067906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.067939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.068943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.068964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.069147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.069181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.069425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.069457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.069590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.069622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.069833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.069849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.069927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.069941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.070956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.070974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.071185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.071216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.071349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.071381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.619 [2024-11-20 10:44:13.071552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.619 [2024-11-20 10:44:13.071584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.619 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.071785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.071818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.072018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.072054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.072291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.072322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.072502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.072536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.072740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.072774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.073020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.073037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.073174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.073206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.073342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.073373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.073572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.073604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.073871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.073906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.074199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.074233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.074361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.074393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.074507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.074526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.074682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.074698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.074865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.074896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.075040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.075074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.075187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.075218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.075459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.075492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.075601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.075633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.075790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.075805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.076016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.076050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.076293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.076324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.076441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.620 [2024-11-20 10:44:13.076474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.620 qpair failed and we were unable to recover it.
00:27:12.620 [2024-11-20 10:44:13.076607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.076641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 00:27:12.620 [2024-11-20 10:44:13.076884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.076900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 00:27:12.620 [2024-11-20 10:44:13.077047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.077064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 00:27:12.620 [2024-11-20 10:44:13.077148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.077162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 00:27:12.620 [2024-11-20 10:44:13.077240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.077255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 
00:27:12.620 [2024-11-20 10:44:13.077396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.620 [2024-11-20 10:44:13.077413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.620 qpair failed and we were unable to recover it. 00:27:12.620 [2024-11-20 10:44:13.077505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.077534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.077773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.077804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.077921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.077989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.078265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.078280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.078422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.078455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.078585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.078617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.078815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.078847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.079036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.079057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.079245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.079278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.079495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.079528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.079637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.079670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.079799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.079830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.079999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.080087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.080191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.080340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.080562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.080787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.080819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.081076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.081093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.081242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.081274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.081454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.081486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.081610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.081642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.081885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.081917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.082168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.082200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.082454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.082487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.082695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.082711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.082851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.082867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.082945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.082975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.083156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.083188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 
00:27:12.621 [2024-11-20 10:44:13.083321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.621 [2024-11-20 10:44:13.083353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.621 qpair failed and we were unable to recover it. 00:27:12.621 [2024-11-20 10:44:13.083480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.083513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.083644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.083676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.083926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.083942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.084169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.084184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.084334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.084372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.084577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.084609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.084788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.084820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.084933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.084978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.085100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.085133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.085331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.085488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.085520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.085704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.085720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.085933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.085976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.086169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.086201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.086370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.086402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.086601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.086633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.086750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.086791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.086868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.086882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.087030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.087201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.087360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.087519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.087734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.087957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.087974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.088126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.088218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.088368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.088542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.088660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 
00:27:12.622 [2024-11-20 10:44:13.088752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.088849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.088864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.089004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.089022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.089105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.622 [2024-11-20 10:44:13.089121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.622 qpair failed and we were unable to recover it. 00:27:12.622 [2024-11-20 10:44:13.089255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.089272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 
00:27:12.623 [2024-11-20 10:44:13.089352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.089366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.089564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.089580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.089659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.089673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.089830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.089863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.090074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.090107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 
00:27:12.623 [2024-11-20 10:44:13.090219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.090250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.090380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.090412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.090534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.090566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.090697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.090729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.090996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 
00:27:12.623 [2024-11-20 10:44:13.091228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.091389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.091536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.091662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.091882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.091971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 
00:27:12.623 [2024-11-20 10:44:13.092213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.092250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.092478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.092517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.092660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.092692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.092825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.092858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 00:27:12.623 [2024-11-20 10:44:13.093053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.623 [2024-11-20 10:44:13.093086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.623 qpair failed and we were unable to recover it. 
00:27:12.627 [2024-11-20 10:44:13.112425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.112458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.112637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.112669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.112802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.112817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.112960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.112977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.113068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.113082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 
00:27:12.627 [2024-11-20 10:44:13.113283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.113299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.113512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.113545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.113658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.113690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.113935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 
00:27:12.627 [2024-11-20 10:44:13.114371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.114483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.114625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.114765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.114908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.114924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 
00:27:12.627 [2024-11-20 10:44:13.115122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.115158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.115279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.115310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.115491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.115524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.115630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.115662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.115774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.115790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 
00:27:12.627 [2024-11-20 10:44:13.116059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.116135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.116364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.116400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.627 [2024-11-20 10:44:13.116609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.627 [2024-11-20 10:44:13.116642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.627 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.116785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.116817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.116993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.117027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.117291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.117323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.117523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.117557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.117677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.117709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.117962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.117996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.118111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.118130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.118306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.118322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.118475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.118508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.118621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.118653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.118785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.118818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.119001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.119145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.119284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.119440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.119639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.119780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.119931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.119973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.120159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.120191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.120375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.120407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.120666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.120698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.120907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.120941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.121092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.121126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.121298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.121330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.121452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.121483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.121590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.121622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.121801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.121833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 
00:27:12.628 [2024-11-20 10:44:13.122039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.122056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.628 [2024-11-20 10:44:13.122262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.628 [2024-11-20 10:44:13.122294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.628 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.122420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.122451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.122620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.122653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.122773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.122806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.629 [2024-11-20 10:44:13.122916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.122959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.123090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.123122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.123288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.123327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.123451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.123484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.123607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.123639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.629 [2024-11-20 10:44:13.123808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.123839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.629 [2024-11-20 10:44:13.124573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.124966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.124984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.125146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.125177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.629 [2024-11-20 10:44:13.126514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.126547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.126734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.126751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.126833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.126848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.126992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.127009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 00:27:12.629 [2024-11-20 10:44:13.127208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.629 [2024-11-20 10:44:13.127224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.629 [2024-11-20 10:44:13.127320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:12.629 [2024-11-20 10:44:13.127334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 
00:27:12.629 qpair failed and we were unable to recover it. 
00:27:12.633 [the posix_sock_create (errno = 111) / nvme_tcp_qpair_connect_sock error pair above, each followed by "qpair failed and we were unable to recover it.", repeats roughly 115 more times between 10:44:13.127491 and 10:44:13.149428; tqpair is 0x23a6ba0 in all but six entries, which instead report tqpair=0x7f6424000b90, 0x7f6420000b90, or 0x7f642c000b90, all with addr=10.0.0.2, port=4420]
00:27:12.633 [2024-11-20 10:44:13.149555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.149587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.149708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.149741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.149929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.149973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.150184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.150219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.151324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.151354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 10:44:13.151530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.151548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.151730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.151764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.152041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.152077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.152271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.152305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.152501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.152533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 10:44:13.152721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.152754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.152991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.153026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.153152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.153185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.153370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.153404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.153537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.153569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 10:44:13.153746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.633 [2024-11-20 10:44:13.153762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.633 qpair failed and we were unable to recover it. 00:27:12.633 [2024-11-20 10:44:13.153902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.153922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 
00:27:12.634 [2024-11-20 10:44:13.154381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.154929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.154945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 
00:27:12.634 [2024-11-20 10:44:13.155029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 
00:27:12.634 [2024-11-20 10:44:13.155593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.155897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.155914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.156062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.156285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 
00:27:12.634 [2024-11-20 10:44:13.156387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.156473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.156641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.156869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.156902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.157061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.157095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 
00:27:12.634 [2024-11-20 10:44:13.157264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.157282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.157363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.157378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.634 [2024-11-20 10:44:13.157515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.634 [2024-11-20 10:44:13.157531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.634 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.157679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.157719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.157897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.157937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.158095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.158129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.158253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.158287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.158478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.158511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.158694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.158727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.158993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.159155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.159364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.159581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.159687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.159797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.159902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.159919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.160014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.160232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.160388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.160597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.160762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.160934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.160956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.161311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.161957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.161978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.162060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.162142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.162241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.162392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 00:27:12.635 [2024-11-20 10:44:13.162602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.635 qpair failed and we were unable to recover it. 
00:27:12.635 [2024-11-20 10:44:13.162760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.635 [2024-11-20 10:44:13.162794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.636 qpair failed and we were unable to recover it. 00:27:12.636 [2024-11-20 10:44:13.162988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.636 [2024-11-20 10:44:13.163022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.636 qpair failed and we were unable to recover it. 00:27:12.636 [2024-11-20 10:44:13.163117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.636 [2024-11-20 10:44:13.163134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.636 qpair failed and we were unable to recover it. 00:27:12.636 [2024-11-20 10:44:13.163215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.636 [2024-11-20 10:44:13.163230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.636 qpair failed and we were unable to recover it. 00:27:12.636 [2024-11-20 10:44:13.163460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.636 [2024-11-20 10:44:13.163476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.636 qpair failed and we were unable to recover it. 
00:27:12.639 [2024-11-20 10:44:13.177771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.177786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.177926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.177942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 
00:27:12.639 [2024-11-20 10:44:13.178563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.178919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.178935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 
00:27:12.639 [2024-11-20 10:44:13.179097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.179114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.179250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.179265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.179416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.179433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.179569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.639 [2024-11-20 10:44:13.179584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.639 qpair failed and we were unable to recover it. 00:27:12.639 [2024-11-20 10:44:13.179721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.179736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.179808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.179824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.179900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.179917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.180098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.180277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.180363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.180462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.180704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.180969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.180985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.181393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.181914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.181930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.182095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.182271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.182467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.182546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.182702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.182811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.182904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.182920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.183434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.183860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.183883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.184041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.184200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.184309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.184415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.184578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.640 [2024-11-20 10:44:13.184676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 
00:27:12.640 [2024-11-20 10:44:13.184771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.640 [2024-11-20 10:44:13.184788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.640 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.184872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.184888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 [2024-11-20 10:44:13.185402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.185851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 [2024-11-20 10:44:13.185934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.185955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.186101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.186202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.186426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.186578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 [2024-11-20 10:44:13.186793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.186905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.186920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 [2024-11-20 10:44:13.187423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 00:27:12.641 [2024-11-20 10:44:13.187788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 [2024-11-20 10:44:13.187952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.641 [2024-11-20 10:44:13.187968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.641 qpair failed and we were unable to recover it. 
00:27:12.641 (last message repeated: the identical connect() errno = 111 / nvme_tcp_qpair_connect_sock error for tqpair=0x23a6ba0, addr=10.0.0.2, port=4420 recurs continuously from [2024-11-20 10:44:13.188040] through [2024-11-20 10:44:13.202646]; every retry ends with "qpair failed and we were unable to recover it.") 
00:27:12.645 [2024-11-20 10:44:13.202791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.202807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.202985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.203514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.203978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.203995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.204249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.204265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.204345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.204361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.204451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.204467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.204618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.204634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.204773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.204845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.205005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.205185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.205315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.205409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.205581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.205681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.205857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.205873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.206644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.206925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.206941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.207111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.207128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 
00:27:12.645 [2024-11-20 10:44:13.207213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.207229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.645 qpair failed and we were unable to recover it. 00:27:12.645 [2024-11-20 10:44:13.207384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.645 [2024-11-20 10:44:13.207399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.207490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.207506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.207642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.207657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.207803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.207818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.207888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.207904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.208564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.208954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.209050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.209715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.209976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.209992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.210345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.210944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.210968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.211064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 
00:27:12.646 [2024-11-20 10:44:13.211731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.211922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.211938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.646 [2024-11-20 10:44:13.212017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.646 [2024-11-20 10:44:13.212033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.646 qpair failed and we were unable to recover it. 00:27:12.647 [2024-11-20 10:44:13.212103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 
00:27:12.647 [2024-11-20 10:44:13.212255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 00:27:12.647 [2024-11-20 10:44:13.212410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 00:27:12.647 [2024-11-20 10:44:13.212560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 00:27:12.647 [2024-11-20 10:44:13.212722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 00:27:12.647 [2024-11-20 10:44:13.212806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.647 [2024-11-20 10:44:13.212821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.647 qpair failed and we were unable to recover it. 
00:27:12.647 [2024-11-20 10:44:13.212900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.647 [2024-11-20 10:44:13.212916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.647 qpair failed and we were unable to recover it.
00:27:12.650 (last three messages repeated for every subsequent connection attempt from [2024-11-20 10:44:13.212996] through [2024-11-20 10:44:13.227842]: each connect() to 10.0.0.2, port=4420 for tqpair=0x23a6ba0 returned errno = 111 and the qpair could not be recovered)
00:27:12.650 [2024-11-20 10:44:13.227914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.227930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 
00:27:12.650 [2024-11-20 10:44:13.228643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.228914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.228930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 
00:27:12.650 [2024-11-20 10:44:13.229280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.229861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 
00:27:12.650 [2024-11-20 10:44:13.229970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.229987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 
00:27:12.650 [2024-11-20 10:44:13.230574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.230912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.230928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.650 [2024-11-20 10:44:13.231013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.650 [2024-11-20 10:44:13.231032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.650 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.231104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.231255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.231357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.231465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.231644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.231814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.231968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.231984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.232158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.232267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.232483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.232702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.232853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.232957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.232974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.233498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.233971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.233987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.234079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.234301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.234470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.234560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.234659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.234819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.234921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.234936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.235522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.235895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.235911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.236060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.236229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.236315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.651 [2024-11-20 10:44:13.236481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 
00:27:12.651 [2024-11-20 10:44:13.236646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.651 [2024-11-20 10:44:13.236661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.651 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.236738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.236754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.236833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.236849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.236942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.236963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.237131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 
00:27:12.652 [2024-11-20 10:44:13.237325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.237409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.237516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.237717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 00:27:12.652 [2024-11-20 10:44:13.237808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.652 [2024-11-20 10:44:13.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.652 qpair failed and we were unable to recover it. 
00:27:12.652 [2024-11-20 10:44:13.237959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.652 [2024-11-20 10:44:13.237976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.652 qpair failed and we were unable to recover it.
00:27:12.652 [2024-11-20 10:44:13.238057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.652 [2024-11-20 10:44:13.238072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.652 qpair failed and we were unable to recover it.
00:27:12.652 [2024-11-20 10:44:13.238162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.652 [2024-11-20 10:44:13.238178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.652 qpair failed and we were unable to recover it.
00:27:12.652 [2024-11-20 10:44:13.238327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.652 [2024-11-20 10:44:13.238343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.652 qpair failed and we were unable to recover it.
00:27:12.652 [2024-11-20 10:44:13.238488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.652 [2024-11-20 10:44:13.238505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.652 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error pair for tqpair=0x23a6ba0 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously through 10:44:13.253690 ...]
00:27:12.655 [2024-11-20 10:44:13.253854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.253870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 
00:27:12.655 [2024-11-20 10:44:13.254485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.254962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.254982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 
00:27:12.655 [2024-11-20 10:44:13.255065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 
00:27:12.655 [2024-11-20 10:44:13.255600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.255960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.255977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 
00:27:12.655 [2024-11-20 10:44:13.256144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.256161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.256306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.256322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.256406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.256422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.655 [2024-11-20 10:44:13.256493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.655 [2024-11-20 10:44:13.256508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.655 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.256572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.256587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.256655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.256669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.256834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.256850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.256916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.256931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.257095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.257166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.257306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.257353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.257569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.257602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.257706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.257726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.257873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.257889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.258407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.258961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.258976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.259127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.259231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.259403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.259559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.259774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.259930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.259946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.260115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.260277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.260457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.260613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.260778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.260930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.260945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 
00:27:12.656 [2024-11-20 10:44:13.261449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.656 [2024-11-20 10:44:13.261884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.656 [2024-11-20 10:44:13.261900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.656 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.261967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.261983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 
00:27:12.657 [2024-11-20 10:44:13.262120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.262287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.262391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.262561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.262735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 
00:27:12.657 [2024-11-20 10:44:13.262906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.262922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.263003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.263019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.263104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.263120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.263293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.263309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 00:27:12.657 [2024-11-20 10:44:13.263464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.657 [2024-11-20 10:44:13.263479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.657 qpair failed and we were unable to recover it. 
00:27:12.941 [2024-11-20 10:44:13.263643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.263659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 00:27:12.941 [2024-11-20 10:44:13.263805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.263822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 00:27:12.941 [2024-11-20 10:44:13.263903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.263919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 00:27:12.941 [2024-11-20 10:44:13.264065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.264084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 00:27:12.941 [2024-11-20 10:44:13.264158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.264173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 
00:27:12.941 [2024-11-20 10:44:13.264270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.941 [2024-11-20 10:44:13.264286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.941 qpair failed and we were unable to recover it. 
[... the same connect()/qpair-failure triplet repeats 114 more times between 10:44:13.264384 and 10:44:13.279392, every entry with errno = 111, tqpair=0x23a6ba0, addr=10.0.0.2, port=4420 ...]
00:27:12.944 [2024-11-20 10:44:13.279490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.944 [2024-11-20 10:44:13.279506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.944 qpair failed and we were unable to recover it. 00:27:12.944 [2024-11-20 10:44:13.279655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.944 [2024-11-20 10:44:13.279671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.944 qpair failed and we were unable to recover it. 00:27:12.944 [2024-11-20 10:44:13.279905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.944 [2024-11-20 10:44:13.279920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.944 qpair failed and we were unable to recover it. 00:27:12.944 [2024-11-20 10:44:13.280062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.944 [2024-11-20 10:44:13.280080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.944 qpair failed and we were unable to recover it. 00:27:12.944 [2024-11-20 10:44:13.280157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.280327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.280434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.280526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.280699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.280855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.280954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.280971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.281541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.281915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.281931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.282108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.282545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.282958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.282975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.283046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.283198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.283286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.283459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.283637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.283731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.283830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.283846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.284015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.284033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.284138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.284154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 00:27:12.945 [2024-11-20 10:44:13.284223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.945 [2024-11-20 10:44:13.284240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.945 qpair failed and we were unable to recover it. 
00:27:12.945 [2024-11-20 10:44:13.284419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.284435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.284634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.284650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.284730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.284745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.284893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.284910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.285217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.285832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.285849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.285987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.286583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.286936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.286957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.287104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.287219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.287469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.287578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.287674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.287756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.287878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.287895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.288028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.288046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.288198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.288214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.288294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.288310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.288401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.288417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 
00:27:12.946 [2024-11-20 10:44:13.288504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.946 [2024-11-20 10:44:13.288520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.946 qpair failed and we were unable to recover it. 00:27:12.946 [2024-11-20 10:44:13.288606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.288621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.288822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.288837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.288909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.288924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.289025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 
00:27:12.947 [2024-11-20 10:44:13.289111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.289260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.289353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.289518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 00:27:12.947 [2024-11-20 10:44:13.289620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.947 [2024-11-20 10:44:13.289636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.947 qpair failed and we were unable to recover it. 
00:27:12.950 [2024-11-20 10:44:13.303398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.303414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.303496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.303511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.303588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.303604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.303830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.303847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.303911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.303926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 
00:27:12.950 [2024-11-20 10:44:13.304014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 
00:27:12.950 [2024-11-20 10:44:13.304549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.304915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.304930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.305079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 
00:27:12.950 [2024-11-20 10:44:13.305296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.305472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.305697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.305874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.305890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 
00:27:12.950 [2024-11-20 10:44:13.306050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.306068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.306223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.306239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.306320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.306335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.306405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.950 [2024-11-20 10:44:13.306420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.950 qpair failed and we were unable to recover it. 00:27:12.950 [2024-11-20 10:44:13.306501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.306516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.306786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.306802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.306883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.306899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.307452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.307933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.307956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.308048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.308636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.308901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.308916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.309331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.309721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.309970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.309987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.310076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.310092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.310169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.310185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.310318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.310334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.310423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.310439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 
00:27:12.951 [2024-11-20 10:44:13.310592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.951 [2024-11-20 10:44:13.310608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.951 qpair failed and we were unable to recover it. 00:27:12.951 [2024-11-20 10:44:13.310697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.310713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.310867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.310883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 
00:27:12.952 [2024-11-20 10:44:13.311418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.311790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 
00:27:12.952 [2024-11-20 10:44:13.311939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.311963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 
00:27:12.952 [2024-11-20 10:44:13.312565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 00:27:12.952 [2024-11-20 10:44:13.312905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.952 [2024-11-20 10:44:13.312920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.952 qpair failed and we were unable to recover it. 
00:27:12.952 [2024-11-20 10:44:13.313077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.313886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.313901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.314954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.314970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.315055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.315071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.315149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.315164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.315246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.952 [2024-11-20 10:44:13.315261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.952 qpair failed and we were unable to recover it.
00:27:12.952 [2024-11-20 10:44:13.315395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.315410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.315502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.315518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.315662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.315677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.315883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.315899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.316959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.316977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.317957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.317975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.318943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.318964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.319179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.319252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.319402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.319438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.319542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.319559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.319771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.319787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.319884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.319899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.320052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.320071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.953 qpair failed and we were unable to recover it.
00:27:12.953 [2024-11-20 10:44:13.320162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.953 [2024-11-20 10:44:13.320177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.320919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.320934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.321839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.321855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.322865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.322881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.323979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.323996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.324137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.324153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.324224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.324239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.324315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.954 [2024-11-20 10:44:13.324332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.954 qpair failed and we were unable to recover it.
00:27:12.954 [2024-11-20 10:44:13.324486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.324502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.324657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.324674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.324775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.324791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.324880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.324896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.325873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.325889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.326901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.326917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.327929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.327946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.328060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.328077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.328146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.955 [2024-11-20 10:44:13.328162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.955 qpair failed and we were unable to recover it.
00:27:12.955 [2024-11-20 10:44:13.328231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 
00:27:12.955 [2024-11-20 10:44:13.328688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.328930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.328945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.329106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.329123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 00:27:12.955 [2024-11-20 10:44:13.329271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.955 [2024-11-20 10:44:13.329287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.955 qpair failed and we were unable to recover it. 
00:27:12.955 [2024-11-20 10:44:13.329353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.329368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.329513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.329530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.329665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.329682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.329827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.329842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.329921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.329937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.330094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.330208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.330308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.330461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.330683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.330837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.331106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.331261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.331412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.331566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.331691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.331854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.331869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.332363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.332923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.332965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.333145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.333414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.333574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.333751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.333843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.333942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.333963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 
00:27:12.956 [2024-11-20 10:44:13.334774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.334803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.334972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.956 [2024-11-20 10:44:13.334989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.956 qpair failed and we were unable to recover it. 00:27:12.956 [2024-11-20 10:44:13.335089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.335221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.335398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.335493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.335684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.335840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.335856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.335996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.336081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.336283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.336390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.336706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.336857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.336890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.337075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.337237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.337447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.337579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.337764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.337854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.337869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.338660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.338929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.338943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.339198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 00:27:12.957 [2024-11-20 10:44:13.339690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.957 [2024-11-20 10:44:13.339704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.957 qpair failed and we were unable to recover it. 
00:27:12.957 [2024-11-20 10:44:13.339798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.957 [2024-11-20 10:44:13.339813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.957 qpair failed and we were unable to recover it.
[... the same error pair repeats for every subsequent connection attempt from 10:44:13.339903 through 10:44:13.355138: connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x23a6ba0, and each qpair fails without recovery ...]
00:27:12.961 [2024-11-20 10:44:13.355235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.961 [2024-11-20 10:44:13.355251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.961 qpair failed and we were unable to recover it.
00:27:12.961 [2024-11-20 10:44:13.355328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.355413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.355535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.355698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.355795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.355894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.355911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.356445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.356891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.356990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.357088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.357572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.357839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.357995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.358084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.358169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.358345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.358432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.358593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 00:27:12.961 [2024-11-20 10:44:13.358701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.961 [2024-11-20 10:44:13.358717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.961 qpair failed and we were unable to recover it. 
00:27:12.961 [2024-11-20 10:44:13.358798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.358813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.358880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.358896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.358984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.359282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.359830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.359847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.359998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.360368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.360899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.360915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.360994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.361442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.361926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.361942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.362169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 
00:27:12.962 [2024-11-20 10:44:13.362691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.962 [2024-11-20 10:44:13.362819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.962 qpair failed and we were unable to recover it. 00:27:12.962 [2024-11-20 10:44:13.362986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 
00:27:12.963 [2024-11-20 10:44:13.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 00:27:12.963 [2024-11-20 10:44:13.363753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 
00:27:12.963 [2024-11-20 10:44:13.363856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.963 [2024-11-20 10:44:13.363872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.963 qpair failed and we were unable to recover it. 
[same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x23a6ba0 (addr=10.0.0.2, port=4420) repeated continuously through 10:44:13.376985]
00:27:12.966 [2024-11-20 10:44:13.377055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 
00:27:12.966 [2024-11-20 10:44:13.377549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.377913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.377996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 
00:27:12.966 [2024-11-20 10:44:13.378084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 
00:27:12.966 [2024-11-20 10:44:13.378612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.378887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.378988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.379006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.379095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.379110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 
00:27:12.966 [2024-11-20 10:44:13.379196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.966 [2024-11-20 10:44:13.379213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.966 qpair failed and we were unable to recover it. 00:27:12.966 [2024-11-20 10:44:13.379347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.379437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.379540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.379702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.379788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.379890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.379906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.379988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.380271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.380822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.380907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.380923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.381489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.381930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.381945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.382039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.382596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.382908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.382924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.383068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.383084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 
00:27:12.967 [2024-11-20 10:44:13.383173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.383188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.967 [2024-11-20 10:44:13.383259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.967 [2024-11-20 10:44:13.383274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.967 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.383341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.383433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.383587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 
00:27:12.968 [2024-11-20 10:44:13.383685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.383846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.383928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.383943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 
00:27:12.968 [2024-11-20 10:44:13.384295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.384870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.384888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 
00:27:12.968 [2024-11-20 10:44:13.385032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.385125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.385212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.385367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 00:27:12.968 [2024-11-20 10:44:13.385534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 
00:27:12.968 [2024-11-20 10:44:13.385617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.968 [2024-11-20 10:44:13.385633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.968 qpair failed and we were unable to recover it. 
[previous 3 messages repeated verbatim for each subsequent connection attempt, 2024-11-20 10:44:13.385725 through 10:44:13.399470]
00:27:12.971 [2024-11-20 10:44:13.399557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.399573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.399645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.399665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.399816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.399832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.399978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.399996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.400228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 
00:27:12.971 [2024-11-20 10:44:13.400329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.400498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.400590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.400736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 00:27:12.971 [2024-11-20 10:44:13.400909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.971 [2024-11-20 10:44:13.400925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.971 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.401067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.401179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.401281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.401447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.401534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.401753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.401768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.401993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.402097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.402265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.402453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.402619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.402720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.402825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.402842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.403043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.403260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.403355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.403511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.403678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.403805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.403823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.404132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.404772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.404937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.404977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.405343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.405922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.405938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 
00:27:12.972 [2024-11-20 10:44:13.406022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.972 [2024-11-20 10:44:13.406038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.972 qpair failed and we were unable to recover it. 00:27:12.972 [2024-11-20 10:44:13.406102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.406629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.406912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.406987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.407233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.407876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.407977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.407993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.408587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.408974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.408990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.409132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.409288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.409466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.409681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.409765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.409875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.973 [2024-11-20 10:44:13.409975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.409990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.410142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.410157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.410226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.410241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.410375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.410391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 00:27:12.973 [2024-11-20 10:44:13.410472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.973 [2024-11-20 10:44:13.410488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.973 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.425127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.425212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.425366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.425463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.425701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.425780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.425879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.425894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.426366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.426807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.426994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.427097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.427196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.427362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.427473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.427639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.427793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.427808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.427984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.428428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.428826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.428938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.428976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.429071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.429087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.429242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.429259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.429341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.429356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.429492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.429508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 
00:27:12.977 [2024-11-20 10:44:13.429586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.977 [2024-11-20 10:44:13.429601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.977 qpair failed and we were unable to recover it. 00:27:12.977 [2024-11-20 10:44:13.429690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.429707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.429880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.429895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.430280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.430871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.430887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.431020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.431171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.431290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.431465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.431664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.431849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.432709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.432975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.432992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.433136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.433152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.433290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.433305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.433448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.433463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.433668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.433685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.433920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.433937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.434339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.434932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.434957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 
00:27:12.978 [2024-11-20 10:44:13.435028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.435044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.435121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.435137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.978 [2024-11-20 10:44:13.435303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.978 [2024-11-20 10:44:13.435321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.978 qpair failed and we were unable to recover it. 00:27:12.979 [2024-11-20 10:44:13.435467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.979 [2024-11-20 10:44:13.435485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.979 qpair failed and we were unable to recover it. 00:27:12.979 [2024-11-20 10:44:13.435567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.979 [2024-11-20 10:44:13.435583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.979 qpair failed and we were unable to recover it. 
00:27:12.979 [2024-11-20 10:44:13.435728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.979 [2024-11-20 10:44:13.435745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.979 qpair failed and we were unable to recover it.
00:27:12.979 [... same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x23a6ba0, addr=10.0.0.2, port=4420 repeated through 2024-11-20 10:44:13.455873 ...]
00:27:12.982 [2024-11-20 10:44:13.456021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.456200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.456446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.456544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.456711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 
00:27:12.982 [2024-11-20 10:44:13.456932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.456958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.457122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.457138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.457287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.457304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.457469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.457485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.457708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.457725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 
00:27:12.982 [2024-11-20 10:44:13.457866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.457883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.458082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.458274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.458435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.458589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 
00:27:12.982 [2024-11-20 10:44:13.458706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.458883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.458899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 
00:27:12.982 [2024-11-20 10:44:13.459435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.459973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.459990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 00:27:12.982 [2024-11-20 10:44:13.460100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.460117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.982 qpair failed and we were unable to recover it. 
00:27:12.982 [2024-11-20 10:44:13.460196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.982 [2024-11-20 10:44:13.460212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.460384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.460400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.460584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.460599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.460791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.460937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.460964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.461070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.461224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.461444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.461546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.461648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.461814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.461901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.461915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.462121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.462138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.462284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.462300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.462520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.462535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.462700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.462717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.462943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.462966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.463517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.463879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.463895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.464063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.464080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.464229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.464246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.464432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.464448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.464620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.464636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.464785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.464801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.465005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.465179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.465340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.465495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.465656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.465741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.465755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.466036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.466217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.466395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.466574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.466796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 
00:27:12.983 [2024-11-20 10:44:13.466969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.467069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.467083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.467185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.467201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.467289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.983 [2024-11-20 10:44:13.467304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.983 qpair failed and we were unable to recover it. 00:27:12.983 [2024-11-20 10:44:13.467396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.467412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 
00:27:12.984 [2024-11-20 10:44:13.467656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.467672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 00:27:12.984 [2024-11-20 10:44:13.467824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.467840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 00:27:12.984 [2024-11-20 10:44:13.468012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.468030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 00:27:12.984 [2024-11-20 10:44:13.468245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.468261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 00:27:12.984 [2024-11-20 10:44:13.468511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.984 [2024-11-20 10:44:13.468526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.984 qpair failed and we were unable to recover it. 
00:27:12.984 [2024-11-20 10:44:13.468669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.984 [2024-11-20 10:44:13.468684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.984 qpair failed and we were unable to recover it.
00:27:12.984 [2024-11-20 10:44:13.470071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.984 [2024-11-20 10:44:13.470143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:12.984 qpair failed and we were unable to recover it.
00:27:12.986 [identical connect()/qpair failure records repeated through 2024-11-20 10:44:13.490019; duplicates removed]
00:27:12.986 [2024-11-20 10:44:13.490225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.986 [2024-11-20 10:44:13.490241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.986 qpair failed and we were unable to recover it. 00:27:12.986 [2024-11-20 10:44:13.490389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.986 [2024-11-20 10:44:13.490407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.986 qpair failed and we were unable to recover it. 00:27:12.986 [2024-11-20 10:44:13.490634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.986 [2024-11-20 10:44:13.490651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.986 qpair failed and we were unable to recover it. 00:27:12.986 [2024-11-20 10:44:13.490813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.986 [2024-11-20 10:44:13.490828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.986 qpair failed and we were unable to recover it. 00:27:12.986 [2024-11-20 10:44:13.491063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.986 [2024-11-20 10:44:13.491080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.986 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.491206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.491222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.491408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.491426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.491634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.491650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.491874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.491890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.492132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.492222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.492391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.492552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.492800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.492910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.492926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.493179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.493198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.493340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.493356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.493491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.493509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.493672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.493689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.493917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.493934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.494182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.494200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.494430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.494446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.494543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.494559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.494731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.494747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.494979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.494996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.495230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.495247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.495438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.495455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.495609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.495625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.495769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.495786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.495897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.495913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.495999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.496227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.496339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.496576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.496749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.496856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.496964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.496984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.497250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.497268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.497415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.497431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.497663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.497679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.497838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.497854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.497967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.497987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.498163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.498180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.498386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.498402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.498625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.498642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 
00:27:12.987 [2024-11-20 10:44:13.498882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.498899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.499046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.987 [2024-11-20 10:44:13.499063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.987 qpair failed and we were unable to recover it. 00:27:12.987 [2024-11-20 10:44:13.499208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.499225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.499454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.499472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.499655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.499672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.499766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.499781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.500016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.500273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.500374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.500573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.500749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.500911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.500927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.501014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.501161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.501398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.501517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.501673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.501830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.501847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.502046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.502065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.502281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.502299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.502504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.502520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.502803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.502820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.502978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.502995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.503237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.503254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.503478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.503495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.503706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.503723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.503945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.503970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.504146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.504163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.504308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.504324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 00:27:12.988 [2024-11-20 10:44:13.504563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.988 [2024-11-20 10:44:13.504581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.988 qpair failed and we were unable to recover it. 
00:27:12.988 [2024-11-20 10:44:13.504759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.988 [2024-11-20 10:44:13.504775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.988 qpair failed and we were unable to recover it.
[... the same error triplet (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously, identical except for timestamps, from [2024-11-20 10:44:13.504932] through [2024-11-20 10:44:13.528323] (log clock 00:27:12.988 - 00:27:12.991) ...]
00:27:12.991 [2024-11-20 10:44:13.528498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.991 [2024-11-20 10:44:13.528516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.991 qpair failed and we were unable to recover it. 00:27:12.991 [2024-11-20 10:44:13.528740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.991 [2024-11-20 10:44:13.528759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.991 qpair failed and we were unable to recover it. 00:27:12.991 [2024-11-20 10:44:13.528906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.991 [2024-11-20 10:44:13.528923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.991 qpair failed and we were unable to recover it. 00:27:12.991 [2024-11-20 10:44:13.529187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.991 [2024-11-20 10:44:13.529204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.991 qpair failed and we were unable to recover it. 00:27:12.991 [2024-11-20 10:44:13.529367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.529383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.529623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.529639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.529833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.529849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.530025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.530207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.530371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.530542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.530773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.530944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.530969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.531072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.531091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.531300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.531321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.531581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.531597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.531769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.531786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.532020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.532039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.532338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.532356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.532591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.532608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.532828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.532845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.532995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.533166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.533321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.533492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.533728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.533924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.533942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.534206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.534224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.534453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.534470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.534570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.534586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.534755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.534772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.534985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.535003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.535214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.535232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.535469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.535489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.535662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.535680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.535840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.535857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.536021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.536039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.536135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.536150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.536433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.536452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.536549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.536565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 
00:27:12.992 [2024-11-20 10:44:13.536667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.536685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.537043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.537120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.992 [2024-11-20 10:44:13.537430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.992 [2024-11-20 10:44:13.537467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:12.992 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.537675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.537695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.537847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.537866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.538009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.538029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.538291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.538309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.538409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.538425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.538602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.538619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.538831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.538866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.539136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.539179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.539346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.539364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.539600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.539617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.539791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.539809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.540032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.540051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.540227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.540244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.540393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.540410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.540594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.540610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.540863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.540881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.541029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.541048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.541334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.541351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.541626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.541643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.541783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.541801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.542002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.542180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.542345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.542506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.542710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.542889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.542906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.543086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.543349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.543453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.543609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.543772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 00:27:12.993 [2024-11-20 10:44:13.543937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.993 [2024-11-20 10:44:13.543962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.993 qpair failed and we were unable to recover it. 
00:27:12.993 [2024-11-20 10:44:13.544118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.994 [2024-11-20 10:44:13.544135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:12.994 qpair failed and we were unable to recover it.
00:27:12.994 [... the same three-line record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 10:44:13.544 through 10:44:13.567 ...]
00:27:12.997 [2024-11-20 10:44:13.567860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.567900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.568085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.568104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.568273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.568311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.568588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.568623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.568816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.568849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.569040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.569076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.569253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.569271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.569428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.569472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.569600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.569633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.569895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.569931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.570203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.570222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.570387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.570423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.570637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.570673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.570959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.570995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.571177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.571217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.571419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.571453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.571678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.571713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.571965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.571988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.572178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.572212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.572438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.572470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.572661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.572694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.572960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.572986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.573228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.573245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.573416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.573434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.573600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.573633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.573765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.573797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.573979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.573998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.574147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.574181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.574415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.574449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.574659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.574692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 
00:27:12.997 [2024-11-20 10:44:13.574878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.574912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.575149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.575188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.575445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.575479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.575696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.997 [2024-11-20 10:44:13.575730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.997 qpair failed and we were unable to recover it. 00:27:12.997 [2024-11-20 10:44:13.575982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.576000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.576103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.576122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.576345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.576362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.576534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.576550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.576737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.576770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.576987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.577031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.577257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.577290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.577475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.577518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.577840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.577857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.578034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.578052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.578205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.578222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.578465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.578483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.578675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.578709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.578888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.578920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.579065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.579097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.579331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.579367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.579640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.579674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.579883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.579918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.580058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.580092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.580358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.580396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.580681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.580717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.580938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.580965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.581131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.581149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.581294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.581334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.581615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.581649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.581913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.581946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.582111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.582144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.582399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.582432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.582585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.582622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.582879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.582913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.583133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.583168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.583442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.583477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.583672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.583705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.583917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.583963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.584191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.584233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.584483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.584517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 
00:27:12.998 [2024-11-20 10:44:13.584643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.584676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.584974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.584995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.998 qpair failed and we were unable to recover it. 00:27:12.998 [2024-11-20 10:44:13.585214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.998 [2024-11-20 10:44:13.585232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.999 qpair failed and we were unable to recover it. 00:27:12.999 [2024-11-20 10:44:13.585404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.999 [2024-11-20 10:44:13.585422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.999 qpair failed and we were unable to recover it. 00:27:12.999 [2024-11-20 10:44:13.585502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.999 [2024-11-20 10:44:13.585518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.999 qpair failed and we were unable to recover it. 
00:27:12.999 [2024-11-20 10:44:13.585701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.999 [2024-11-20 10:44:13.585718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:12.999 qpair failed and we were unable to recover it. 
[... the same error sequence — connect() failed with errno = 111 (ECONNREFUSED) in posix_sock_create, followed by the nvme_tcp_qpair_connect_sock connection error for tqpair=0x23a6ba0 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 10:44:13.585946 through 10:44:13.613280; repeated occurrences elided ...]
00:27:13.002 [2024-11-20 10:44:13.613449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.613467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.613654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.613671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.613894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.614128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.614163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.614433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.614467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.614668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.614702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.614903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.614921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.615154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.615189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.615389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.615423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.615604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.615639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.615924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.615967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.616182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.616217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.616394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.616411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.616656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.616674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.616794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.616828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.617131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.617169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.617289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.617327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.617547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.617567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.617732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.617749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.617919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.617984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.618241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.618275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.618478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.618511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.618734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.618766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.618976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.618995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.619223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.619257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.619511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.619546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.619682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.619715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.619993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.620029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.620158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.620194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.620448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.620465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 
00:27:13.002 [2024-11-20 10:44:13.620642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.002 [2024-11-20 10:44:13.620660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.002 qpair failed and we were unable to recover it. 00:27:13.002 [2024-11-20 10:44:13.620826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.620860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.621112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.621149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.621447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.621481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.621766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.621800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.622073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.622091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.622357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.622375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.622594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.622612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.622762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.622779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.623041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.623258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.623275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.623537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.623558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.623828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.623847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.623938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.623960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.624197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.624215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.624466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.624501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.624750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.624784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.625113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.625133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.625376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.625394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.625616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.625635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.625848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.625867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.626084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.626102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.626267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.626286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.626461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.626479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.626585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.626603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.626856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.626873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.627049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.627084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.627367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.627400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.627586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.627623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.627878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.627910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.628029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.628046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.628221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.628238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.628453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.628471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.628574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.628615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.628866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.628899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 
00:27:13.003 [2024-11-20 10:44:13.629151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.629173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.629402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.629419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.629683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.629702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.629812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.629835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.003 qpair failed and we were unable to recover it. 00:27:13.003 [2024-11-20 10:44:13.629964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.003 [2024-11-20 10:44:13.629983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 
00:27:13.004 [2024-11-20 10:44:13.630132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.630150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.630322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.630355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.630632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.630665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.630944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.630992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.631178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.631212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 
00:27:13.004 [2024-11-20 10:44:13.631513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.631531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.631775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.631793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.632014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.632034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.632194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.632211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 00:27:13.004 [2024-11-20 10:44:13.632387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.004 [2024-11-20 10:44:13.632420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.004 qpair failed and we were unable to recover it. 
00:27:13.289 [2024-11-20 10:44:13.659038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.659060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.659299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.659316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.659581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.659599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.659743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.659758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.659932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.659955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 
00:27:13.289 [2024-11-20 10:44:13.660113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.660131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.660277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.660292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.660523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.660540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.660804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.660821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.660996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.661017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 
00:27:13.289 [2024-11-20 10:44:13.661185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.661202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.662087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.662121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.662395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.662411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.662650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.662667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.662935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.662962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 
00:27:13.289 [2024-11-20 10:44:13.663178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.663196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.663312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.663330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.663439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.663453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.663610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.663625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.663809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.663825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 
00:27:13.289 [2024-11-20 10:44:13.663979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.664000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.289 qpair failed and we were unable to recover it. 00:27:13.289 [2024-11-20 10:44:13.664180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.289 [2024-11-20 10:44:13.664197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.664360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.664375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.664539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.664554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.664772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.664790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.664968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.664989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.665074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.665089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.665348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.665380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.665641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.665676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.665868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.665900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.666135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.666170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.666353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.666368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.666607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.666640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.666833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.666867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.667030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.667046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.667227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.667245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.667462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.667477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.667714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.667730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.667960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.667977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.668197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.668213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.668367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.668383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.668619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.668696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.668937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.668995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.669201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.669236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.669505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.669546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.669762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.669795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.670094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.670128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.670396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.670412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.670580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.670596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.670852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.670883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.671147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.671164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.671312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.671330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.671511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.671525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.671689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.671705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.671878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.671896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 
00:27:13.290 [2024-11-20 10:44:13.672063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.290 [2024-11-20 10:44:13.672081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.290 qpair failed and we were unable to recover it. 00:27:13.290 [2024-11-20 10:44:13.672168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.672182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.672289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.672306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.672482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.672498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.672651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.672668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 
00:27:13.291 [2024-11-20 10:44:13.672884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.672899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.673069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.673199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.673380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.673621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 
00:27:13.291 [2024-11-20 10:44:13.673815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.673925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.673942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.674073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.674092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.674278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.674299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.674450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.674466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 
00:27:13.291 [2024-11-20 10:44:13.674642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.674658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.674804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.674819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.675087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.675104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.675219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.675233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 00:27:13.291 [2024-11-20 10:44:13.675380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.291 [2024-11-20 10:44:13.675398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.291 qpair failed and we were unable to recover it. 
00:27:13.291 [2024-11-20 10:44:13.675577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.291 [2024-11-20 10:44:13.675593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.291 qpair failed and we were unable to recover it.
00:27:13.295 [2024-11-20 10:44:13.693758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.693776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.693884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.693899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.693993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.694157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.694331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 
00:27:13.295 [2024-11-20 10:44:13.694442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.694528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.694692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.694852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.694867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 
00:27:13.295 [2024-11-20 10:44:13.695138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 
00:27:13.295 [2024-11-20 10:44:13.695834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.695931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.695955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.696104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.696307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.696410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 
00:27:13.295 [2024-11-20 10:44:13.696565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.696663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.295 qpair failed and we were unable to recover it. 00:27:13.295 [2024-11-20 10:44:13.696778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.295 [2024-11-20 10:44:13.696792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.696878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.696895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.697261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.697834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.697966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.697982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.698342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.698919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.698932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.699188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.699296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.699456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.699624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.699790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.699902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 
00:27:13.296 [2024-11-20 10:44:13.700457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.700921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.296 [2024-11-20 10:44:13.700935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.296 qpair failed and we were unable to recover it. 00:27:13.296 [2024-11-20 10:44:13.701038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 
00:27:13.297 [2024-11-20 10:44:13.701155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.701279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.701436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.701615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.701785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 
00:27:13.297 [2024-11-20 10:44:13.701944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.701985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.702093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.702108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.702362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.702379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.702471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.702487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.702668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.702685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 
00:27:13.297 [2024-11-20 10:44:13.702850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.702866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.703045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.703221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.703388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.703550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 
00:27:13.297 [2024-11-20 10:44:13.703740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.703899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.703914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.704016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.704030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.704176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.704194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 00:27:13.297 [2024-11-20 10:44:13.704288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.297 [2024-11-20 10:44:13.704302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.297 qpair failed and we were unable to recover it. 
00:27:13.297 [2024-11-20 10:44:13.704387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.297 [2024-11-20 10:44:13.704402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.297 qpair failed and we were unable to recover it.
00:27:13.297-00:27:13.301 [the identical three-line error sequence repeats 114 more times, timestamps 10:44:13.704472 through 10:44:13.722185, all for tqpair=0x23a6ba0, addr=10.0.0.2, port=4420]
00:27:13.301 [2024-11-20 10:44:13.722279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.722294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.722449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.722465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.722683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.722699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.722799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.722814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.722895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.722909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 
00:27:13.301 [2024-11-20 10:44:13.723215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.723232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.723335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.723557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.723588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.723719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.723965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.724000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 
00:27:13.301 [2024-11-20 10:44:13.724274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.724305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.724579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.724611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.724852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.724885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.725036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.725074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.725330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.725361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 
00:27:13.301 [2024-11-20 10:44:13.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.725638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.725868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.725884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.726021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.726037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.726190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.726205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.726419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.726435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 
00:27:13.301 [2024-11-20 10:44:13.726689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.726705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.726913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.726930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.727118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.727135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.727380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.727412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.301 [2024-11-20 10:44:13.727534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.727567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 
00:27:13.301 [2024-11-20 10:44:13.727770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.301 [2024-11-20 10:44:13.727802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.301 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.727992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.728027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.728213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.728231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.728465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.728481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.728714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.728731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.728957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.728978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.729144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.729160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.729320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.729335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.729493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.729509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.729666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.729682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.729837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.729851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.730009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.730024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.730280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.730313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.730422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.730454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.730699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.730731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.730933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.730993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.731321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.731354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.731642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.731658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.731922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.731937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.732111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.732126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.732342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.732389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.732571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.732604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.732803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.732833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.733073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.733110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.733362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.733395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.733677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.733692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.733838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.733854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.734074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.734090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.734244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.734260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.734416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.734434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 
00:27:13.302 [2024-11-20 10:44:13.734624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.734640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.734780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.734795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.735046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.302 [2024-11-20 10:44:13.735063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.302 qpair failed and we were unable to recover it. 00:27:13.302 [2024-11-20 10:44:13.735229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.735244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.735421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.735476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 
00:27:13.303 [2024-11-20 10:44:13.735678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.735710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.736019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.736053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.736252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.736285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.736551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.736567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.736711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.736727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 
00:27:13.303 [2024-11-20 10:44:13.736904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.736920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.737144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.737161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.737310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.737326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.737510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.737525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.737689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.737705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 
00:27:13.303 [2024-11-20 10:44:13.737791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.737805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.737996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.738013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.738246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.738261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.738435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.738451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 00:27:13.303 [2024-11-20 10:44:13.738550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.303 [2024-11-20 10:44:13.738566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.303 qpair failed and we were unable to recover it. 
00:27:13.303 [2024-11-20 10:44:13.738670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.303 [2024-11-20 10:44:13.738689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.303 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:44:13.738962 through 10:44:13.763298 ...]
00:27:13.307 [2024-11-20 10:44:13.763569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.763603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.763747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.763762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.763920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.763936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.764124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.764139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.764304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.764320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 
00:27:13.307 [2024-11-20 10:44:13.764493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.764509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.764801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.764835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.765118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.765152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.765428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.765444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.765708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.765724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 
00:27:13.307 [2024-11-20 10:44:13.765881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.765896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.766134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.766172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.766387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.766420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.766680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.766713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.766977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.767012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 
00:27:13.307 [2024-11-20 10:44:13.767316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.767359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.767600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.767616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.767856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.767873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.768023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.768041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.768262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.768277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 
00:27:13.307 [2024-11-20 10:44:13.768548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.307 [2024-11-20 10:44:13.768563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.307 qpair failed and we were unable to recover it. 00:27:13.307 [2024-11-20 10:44:13.768731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.768746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.768909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.768926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.769181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.769197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.769366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.769383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.769541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.769556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.769815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.769831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.770050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.770067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.770297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.770313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.770535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.770552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.770798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.770831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.771087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.771122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.771263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.771302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.771545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.771560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.771746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.771761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.771979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.771997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.772112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.772127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.772277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.772292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.772463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.772478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.772643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.772660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.772821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.772836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.773094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.773112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.773294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.773310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.773482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.773497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.773726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.773742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.773827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.773842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.774082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.774101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.774324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.774340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.774427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.774441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.774601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.774616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 
00:27:13.308 [2024-11-20 10:44:13.774837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.774853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.775016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.775033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.775226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.775242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.308 qpair failed and we were unable to recover it. 00:27:13.308 [2024-11-20 10:44:13.775488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.308 [2024-11-20 10:44:13.775504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.775665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.775685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.309 [2024-11-20 10:44:13.775841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.775857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.775979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.775994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.776157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.776175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.776339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.776354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.776537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.776553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.309 [2024-11-20 10:44:13.776703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.776719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.776846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.776863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.777106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.777122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.777290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.777306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.777521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.777537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.309 [2024-11-20 10:44:13.777758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.777791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.778062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.778100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.778289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.778323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.778591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.778624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.778821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.778837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.309 [2024-11-20 10:44:13.778984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.779002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.779159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.779176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.779358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.779373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.779667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.779699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.779817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.779851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.309 [2024-11-20 10:44:13.780057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.780091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.780273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.780306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.780575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.780592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.780841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.780857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 00:27:13.309 [2024-11-20 10:44:13.781097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.309 [2024-11-20 10:44:13.781144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.309 qpair failed and we were unable to recover it. 
00:27:13.313 [2024-11-20 10:44:13.803754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.803768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.803986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.804003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.804261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.804279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.804525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.804540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.804688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.804703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 
00:27:13.313 [2024-11-20 10:44:13.804877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.804893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.805106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.805122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.805288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.805305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.805553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.805584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.805838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.805870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 
00:27:13.313 [2024-11-20 10:44:13.806131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.806168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.806456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.806490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.806677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.806692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.806918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.806982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.807276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.807308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 
00:27:13.313 [2024-11-20 10:44:13.807574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.807609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.807815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.807846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.313 [2024-11-20 10:44:13.808126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.313 [2024-11-20 10:44:13.808162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.313 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.808375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.808407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.808707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.808740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.809022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.809039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.809197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.809211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.809382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.809397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.809660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.809675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.809965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.809986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.810150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.810166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.810341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.810356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.810601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.810632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.810831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.810862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.811049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.811082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.811224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.811268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.811550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.811565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.811734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.811749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.811911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.811969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.812223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.812255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.812520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.812552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.812800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.812831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.813109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.813142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.813347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.813379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.813649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.813665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.813834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.813850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.814072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.814089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.814272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.814287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.814535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.814567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.814823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.814855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 
00:27:13.314 [2024-11-20 10:44:13.815036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.815069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.815290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.314 [2024-11-20 10:44:13.815321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.314 qpair failed and we were unable to recover it. 00:27:13.314 [2024-11-20 10:44:13.815598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.815631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.815874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.815905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.816117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.816150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.816357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.816372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.816547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.816579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.816861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.816892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.817087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.817122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.817428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.817460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.817732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.817764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.818045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.818081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.818337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.818370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.818590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.818604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.818787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.818802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.819007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.819040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.819260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.819293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.819536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.819781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.819796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.820011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.820028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.820198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.820213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.820426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.820441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.820631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.820647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.820895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.820911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.821162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.821178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.821419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.821434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.821674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.821689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.821901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.821916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.822081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.822099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.822347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.822380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 
00:27:13.315 [2024-11-20 10:44:13.822529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.822561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.822760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.822791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.823076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.315 [2024-11-20 10:44:13.823111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.315 qpair failed and we were unable to recover it. 00:27:13.315 [2024-11-20 10:44:13.823414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.823446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.823647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.823680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.823878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.823910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.824193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.824233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.824429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.824462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.824683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.824715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.824980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.824996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.825161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.825176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.825426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.825459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.825677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.825709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.825996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.826032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.826309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.826342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.826538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.826570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.826763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.826778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.827046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.827063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.827301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.827316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.827469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.827484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.827750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.827783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.828035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.828068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.828256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.828288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.828546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.828578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.828787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.828801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.829042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.829076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.829329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.829361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.829643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.829674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.829853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.829867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.830112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.830129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 
00:27:13.316 [2024-11-20 10:44:13.830286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.830329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.830610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.830643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.830844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.830876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.831153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.316 [2024-11-20 10:44:13.831193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.316 qpair failed and we were unable to recover it. 00:27:13.316 [2024-11-20 10:44:13.831401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.831432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.831633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.831665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.831872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.831887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.832061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.832077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.832261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.832291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.832446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.832461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.832720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.832752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.832936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.832978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.833251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.833283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.833544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.833560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.833720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.833735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.833892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.833908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.834151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.834168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.834407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.834423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.834644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.834660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.834839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.834855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.835031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.835047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.835194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.835210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.835422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.835680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.835716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.835994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.836028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.836308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.836339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.836559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.836591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.836856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.836871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.837059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.837223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.837417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.837599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.837878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.837981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.837997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.838183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.838198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 
00:27:13.317 [2024-11-20 10:44:13.838381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.838396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.317 qpair failed and we were unable to recover it. 00:27:13.317 [2024-11-20 10:44:13.838579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.317 [2024-11-20 10:44:13.838611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.838810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.838841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.839052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.839086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.839357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.839389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.839642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.839657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.839804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.839819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.840061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.840077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.840342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.840357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.840601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.840617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.840774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.840790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.841036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.841069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.841268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.841302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.841550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.841582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.841708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.841723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.841912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.841928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.842116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.842133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.842349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.842364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.842530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.842544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.842718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.842733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.842975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.842991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.843239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.843255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.843411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.843426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.843594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.843632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.843834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.843867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.844052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.844085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.844368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.844399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.844677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.844692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.844938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.844959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 00:27:13.318 [2024-11-20 10:44:13.845201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.318 [2024-11-20 10:44:13.845216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.318 qpair failed and we were unable to recover it. 
00:27:13.318 [2024-11-20 10:44:13.845373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.318 [2024-11-20 10:44:13.845387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.318 qpair failed and we were unable to recover it.
00:27:13.322 [the same connect()-refused (errno = 111) / qpair-failure triplet for tqpair=0x23a6ba0 (10.0.0.2, port 4420) repeats continuously, last occurrence at 2024-11-20 10:44:13.872859]
00:27:13.322 [2024-11-20 10:44:13.873093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.873109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.873250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.873265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.873450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.873465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.873621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.873636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.873899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.873914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 
00:27:13.322 [2024-11-20 10:44:13.874162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.874179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.874342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.874357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.874442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.874455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.874600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.874615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.874772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.874787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 
00:27:13.322 [2024-11-20 10:44:13.875051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.875068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.875288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.875303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.875544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.875559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.875828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.875871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 00:27:13.322 [2024-11-20 10:44:13.876168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.322 [2024-11-20 10:44:13.876203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.322 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.876416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.876447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.876723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.876754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.877009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.877026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.877239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.877254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.877430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.877461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.877776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.877809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.878086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.878119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.878370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.878410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.878651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.878666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.878884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.878899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.878996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.879181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.879306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.879540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.879654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.879919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.879935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.880179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.880422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.880439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.880709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.880724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.880961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.880978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.881121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.881137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.881385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.881417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.881673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.881689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.881902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.881917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.882171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.882187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.882349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.882368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.882513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.882528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.882630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.882650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.882885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.882899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.883045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.883062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 
00:27:13.323 [2024-11-20 10:44:13.883279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.883294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.883541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.883573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.323 qpair failed and we were unable to recover it. 00:27:13.323 [2024-11-20 10:44:13.883752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.323 [2024-11-20 10:44:13.883783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.883984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.884017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.884207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.884240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.884443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.884475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.884675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.884706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.884984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.885191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.885310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.885543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.885772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.885964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.885980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.886175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.886191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.886295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.886310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.886469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.886484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.886716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.886731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.886917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.886973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.887161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.887193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.887477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.887508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.887707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.887722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.887899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.887931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.888149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.888183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.888467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.888500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.888776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.889008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.889146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.889305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.889467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.889646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.889847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.889863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.890011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.890027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.890137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.890153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.890245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.890258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 00:27:13.324 [2024-11-20 10:44:13.890420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.324 [2024-11-20 10:44:13.890435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.324 qpair failed and we were unable to recover it. 
00:27:13.324 [2024-11-20 10:44:13.890532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.890546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.890768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.890783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.891026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.891042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.891305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.891337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.891551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.891582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 
00:27:13.325 [2024-11-20 10:44:13.891862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.891895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.892152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.892186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.892298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.892329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.892583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.892614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.892821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.892853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 
00:27:13.325 [2024-11-20 10:44:13.893130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.893165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.893374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.893407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.893587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.893620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.893839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.893871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.894123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.894157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 
00:27:13.325 [2024-11-20 10:44:13.894422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.894454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.894747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.894762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.894930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.894946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.895156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.895189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.895392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.895424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 
00:27:13.325 [2024-11-20 10:44:13.895680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.895711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.895974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.895989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.896224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.896239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.896338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.896353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.896603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.896618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 
00:27:13.325 [2024-11-20 10:44:13.896856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.896871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.325 [2024-11-20 10:44:13.897038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.325 [2024-11-20 10:44:13.897055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.325 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.897274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.897290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.897434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.897453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.897628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.897643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.897889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.897903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.898097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.898114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.898305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.898337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.898631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.898663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.898883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.898915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.899071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.899104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.899326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.899358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.899570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.899602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.899900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.899915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.900148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.900165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.900393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.900408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.900667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.900682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.900924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.900939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.901152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.901169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.901433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.901448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.901692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.901707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.901923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.901938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.902207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.902223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.902416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.902430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.902651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.902683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.902931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.902974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.903162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.903195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.903411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.903443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.903711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.903744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.904041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.904075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 
00:27:13.326 [2024-11-20 10:44:13.904297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.904335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.904614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.904646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.904901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.904917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.905074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.326 [2024-11-20 10:44:13.905090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.326 qpair failed and we were unable to recover it. 00:27:13.326 [2024-11-20 10:44:13.905337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.905369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.905600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.905631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.905817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.905848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.906082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.906098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.906319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.906334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.906575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.906591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.906846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.906861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.907105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.907122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.907264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.907279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.907491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.907506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.907727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.907742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.907962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.907978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.908134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.908149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.908373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.908389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.908651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.908666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.908915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.908961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.909248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.909284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.909503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.909536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.909809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.909840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.910041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.910077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.910353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.910385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.910563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.910595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.910728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.910762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.911008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.911030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.911269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.911284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.911451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.911467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.911706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.911722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.911958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.911974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.912193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.912209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.912316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.912330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.912500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.912515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.327 [2024-11-20 10:44:13.912674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.912689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.912935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.912959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.913214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.913247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.913455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.913487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 00:27:13.327 [2024-11-20 10:44:13.913773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.327 [2024-11-20 10:44:13.913789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.327 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.938588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.938606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.938770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.938785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.938883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.938898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.939065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.939081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.939194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.939212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.939426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.939440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.939615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.939631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.939814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.939830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.940062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.940078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.940304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.940319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.940497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.940512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.940743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.940775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.940917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.940965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.941335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.941406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.941705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.941743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.941963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.942002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.942280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.942299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.942568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.942584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.942784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.942799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.942964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.942980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.943168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.943184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.943415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.943432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.943617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.943634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.943802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.943818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.944070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.944086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.944262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.944277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.944438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.944475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.944714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.944747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.945018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.945252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 
00:27:13.331 [2024-11-20 10:44:13.945436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.945622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.945741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.945856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.331 [2024-11-20 10:44:13.945871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.331 qpair failed and we were unable to recover it. 00:27:13.331 [2024-11-20 10:44:13.946038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.946054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.946318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.946333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.946497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.946534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.946767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.946800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.946983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.947017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.947216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.947232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.947476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.947496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.947735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.947750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.947897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.947913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.948165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.948181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.948425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.948440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.948622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.948637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.948808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.948824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.949059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.949078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.949316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.949332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.949520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.949537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.949744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.949776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.950031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.950066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.950283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.950318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.950592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.950626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.950826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.950843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.950961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.950977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.951210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.951226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.951459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.951475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.951734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.951750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.951919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.951935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.952232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.952249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.952485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.952502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.952667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.952683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.952864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.952879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.953067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.953332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.953348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.953514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.953530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.953771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.953810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.954097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.954131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.954427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.954444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-11-20 10:44:13.954604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.954621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-11-20 10:44:13.954840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-11-20 10:44:13.954856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.333 [2024-11-20 10:44:13.955025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-11-20 10:44:13.955041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-11-20 10:44:13.955198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-11-20 10:44:13.955214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-11-20 10:44:13.955358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-11-20 10:44:13.955373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 
00:27:13.333 [2024-11-20 10:44:13.955526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.955543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.955788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.955804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.956034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.956052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.956272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.956289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.956466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.956481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.956701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.956717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.956825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.956839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.957008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.957025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.957246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.957261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.957497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.957529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.957732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.957763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.958046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.958080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.958359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.958376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.958594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.958609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.958853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.958869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.959032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.959048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.959260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.959276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.959453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.959468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.959631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.959647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.959830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.959849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.960043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.960059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.960274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.960290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.960478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.960640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.960655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.960820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.960835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.961079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.961122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.961266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.961300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.961519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.961552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.961813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.961847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.962150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.962185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.962444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.962476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.962766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.962798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.963070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.333 [2024-11-20 10:44:13.963086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-11-20 10:44:13.963249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.963264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.963483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.963788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.963820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.964077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.964095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.964240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.964255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.964413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.964428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.964575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.964591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.964835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.964851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.965024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.965044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.965295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.965311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.965578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.965593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.965862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.965879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.966930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.966945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.967195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.967213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.967429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.967445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.967667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.967684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.967853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.967870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.968844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.968860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.969150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.969168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.969387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.969404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.969578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.969600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.969725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.969739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.969908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.969923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.970099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.970115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.970256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.970269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.970514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.970527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.970671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.970686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.970876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.971147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.971163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.971382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.971399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-11-20 10:44:13.971549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.334 [2024-11-20 10:44:13.971563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.971754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.971778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.971971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.971989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.972168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.972189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.972312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.972331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.972507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.972530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.972655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.972678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.972845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.972868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.973149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.973174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.973414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.973432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.973599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.973615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.973863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.973879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.974152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.974171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.974400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.974589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.974612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.974773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.974789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.975964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.975982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.976131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.976147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.976337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.976352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.976425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.976440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.976671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.976686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.976838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.976854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.977892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.977909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.978177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.978194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.978440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.978459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.978621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.978638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-11-20 10:44:13.978793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-11-20 10:44:13.978809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.979077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-11-20 10:44:13.979098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.979260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-11-20 10:44:13.979275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.979439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-11-20 10:44:13.979454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.979716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-11-20 10:44:13.979748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.980002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-11-20 10:44:13.980028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-11-20 10:44:13.980197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.980326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.980455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.980654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.980831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.980965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.980982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.981077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.981093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.981330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.981346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.981547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.981562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.981750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.981765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.982040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.982151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.982384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.982603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.982789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.982962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.982982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.983146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.983161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.983331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.983349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.983500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.983515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.983746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.983762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.983937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.983964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.984196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.984211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.984380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.984397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.984566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.984581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.984828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.984860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.985120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.985156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.985360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.985376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.985546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.985562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.985805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.985822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.985985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.986001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.986184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.986219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.986454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.986487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.986599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.986630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.986831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.986864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.987128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.987146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.987401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.987418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.987528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.987769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.987784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.987976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.987994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.988144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.988161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 
00:27:13.336 [2024-11-20 10:44:13.988400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.336 [2024-11-20 10:44:13.988417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.336 qpair failed and we were unable to recover it. 00:27:13.336 [2024-11-20 10:44:13.988585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.988600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.988743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.988758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.988921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.988936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.989202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.989220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.989481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.989496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.989650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.989665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.989878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.989894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.990053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.990072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.990317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.990334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.990548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.990565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.990806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.990822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.991572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.991938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.991963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.992213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.992836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.992851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.992998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.993162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.993339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.993510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 
00:27:13.337 [2024-11-20 10:44:13.993627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.337 [2024-11-20 10:44:13.993714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.337 [2024-11-20 10:44:13.993728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.337 qpair failed and we were unable to recover it. 00:27:13.616 [2024-11-20 10:44:13.993874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.616 [2024-11-20 10:44:13.993890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.616 qpair failed and we were unable to recover it. 00:27:13.616 [2024-11-20 10:44:13.994132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.616 [2024-11-20 10:44:13.994149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.616 qpair failed and we were unable to recover it. 00:27:13.616 [2024-11-20 10:44:13.994243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.616 [2024-11-20 10:44:13.994258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.616 qpair failed and we were unable to recover it. 
00:27:13.616 [2024-11-20 10:44:13.994366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.994381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.994463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.994478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.994561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.994576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.994754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.994772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.994870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.994885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.994988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.995856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.995874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.996929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.996944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.997048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.997068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.997209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.997224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.997460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.997477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.997698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.616 [2024-11-20 10:44:13.997715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.616 qpair failed and we were unable to recover it.
00:27:13.616 [2024-11-20 10:44:13.997869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.997884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.997984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.998879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.998893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:13.999892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:13.999905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.000861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.000877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.001046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.001064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.001307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.001347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.001627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.001659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.001776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.001807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.002843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.002859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.003110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.003126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.003277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.003292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.003613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.003646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.003920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.003964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.004146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.004163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.617 [2024-11-20 10:44:14.004339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.617 [2024-11-20 10:44:14.004354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.617 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.004584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.004616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.004800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.004834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.005119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.005156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.005340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.005358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.005570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.005585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.005752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.005768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.005936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.005961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.006170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.006186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.006383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.006398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.006642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.006675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.006852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.006884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.007124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.007159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.007357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.007372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.007636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.007651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.007893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.007911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.008081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.008097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.008324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.008358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.008613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.008648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.008833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.009063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.009081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.009250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.009267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.009483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.009499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.009663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.009679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.009849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.009865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.010106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.010123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.010216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.010231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.010483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.010499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.010694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.010710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.010978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.010997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.011241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.011257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.011478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.011494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.011726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.011742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.011893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.011910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.012012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.012028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.012187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.012202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.012426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.012442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.012598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.012615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.618 [2024-11-20 10:44:14.012887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.618 [2024-11-20 10:44:14.012919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.618 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.013066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.013103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.013372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.013388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.013573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.013589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.013781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.013797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.013972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.013988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.014086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.014101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.014258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.014275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.014520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.014535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.014689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.014706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.014875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.014891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.015848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.619 [2024-11-20 10:44:14.015867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.619 qpair failed and we were unable to recover it.
00:27:13.619 [2024-11-20 10:44:14.016087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.016104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.016368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.016384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.016545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.016560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.016835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.016851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.017001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 
00:27:13.619 [2024-11-20 10:44:14.017195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.017365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.017526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.017689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.017870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.017887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 
00:27:13.619 [2024-11-20 10:44:14.018057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.018074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.018294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.018326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.018642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.018676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.018864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.018895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.019160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.019193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 
00:27:13.619 [2024-11-20 10:44:14.019403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.019437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.019655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.019687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.019876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.019909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.020132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.020167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.020375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.020391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 
00:27:13.619 [2024-11-20 10:44:14.020570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.020617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.619 [2024-11-20 10:44:14.020760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.619 [2024-11-20 10:44:14.020793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.619 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.021077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.021115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.021297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.021315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.021463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.021479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.021576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.021590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.021806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.021824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.022005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.022261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.022421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.022604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.022699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.022808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.022823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.023311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.023944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.023987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.024257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.024291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.024488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.024520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.024698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.024731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.025010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.025049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.025240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.025255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.025522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.025555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.025764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.025798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.026063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.026240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.026438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.026609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.026851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.026958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.026973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.027217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.027232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 00:27:13.620 [2024-11-20 10:44:14.027452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.620 [2024-11-20 10:44:14.027467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.620 qpair failed and we were unable to recover it. 
00:27:13.620 [2024-11-20 10:44:14.027557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.027570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.027679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.027694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.027780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.027793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.027892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.027908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.028016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 
00:27:13.621 [2024-11-20 10:44:14.028207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.028361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.028461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.028640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.028803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.028818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 
00:27:13.621 [2024-11-20 10:44:14.029085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.029103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.029273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.029288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.029574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.029606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.029797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.029829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.030017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.030051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 
00:27:13.621 [2024-11-20 10:44:14.030353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.030368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.030535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.030550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.030724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.030740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.030993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.031028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 00:27:13.621 [2024-11-20 10:44:14.031277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.621 [2024-11-20 10:44:14.031309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.621 qpair failed and we were unable to recover it. 
00:27:13.621 [2024-11-20 10:44:14.031567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.031599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.031777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.031809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.032093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.032126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.032393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.032409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.032636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.032652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.032890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.032905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.033096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.033114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.033307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.033323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.033471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.033486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.033702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.033873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.033888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.034050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.034066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.034175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.034191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.034410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.034423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.034672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.034687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.034778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.034791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.035019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.035035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.035298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.035313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.035556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.035570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.035794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.035816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.036065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.036081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.036323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.036338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.036528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.036543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.036727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.036742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.036899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.036935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.037240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.037275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.621 [2024-11-20 10:44:14.037431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.621 [2024-11-20 10:44:14.037463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.621 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.037737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.037769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.038028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.038062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.038259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.038292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.038562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.038593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.038889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.038920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.039122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.039155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.039345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.039360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.039539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.039570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.039781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.039812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.040089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.040123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.040297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.040312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.040481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.040497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.040652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.040689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.040944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.040997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.041204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.041237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.041491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.041522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.041703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.041734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.042013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.042029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.042244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.042259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.042438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.042457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.042701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.042734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.042879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.042911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.043196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.043229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.043424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.043440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.043677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.043708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.043901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.043931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.044147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.044181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.044374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.044388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.044612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.044642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.044908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.044941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.045182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.045216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.045526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.045558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.045833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.045865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.046153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.046188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.046414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.046429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.046663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.046679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.046898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.046913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.047173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.047190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.047351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.047366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.047614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.047645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.047835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.047867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.048055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.048089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.048362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.048377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.048558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.048573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.048815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.622 [2024-11-20 10:44:14.048830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.622 qpair failed and we were unable to recover it.
00:27:13.622 [2024-11-20 10:44:14.048995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.049202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.049432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.049591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.049770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.049933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.049955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.050168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.050184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.050272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.050286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.050504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.050520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.050679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.050694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.050876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.050908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.051139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.051173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.051429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.051729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.051761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.052032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.052074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.052344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.052360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.052593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.052609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.052754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.052769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.052930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.052945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.053123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.053139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.053301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.053316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.053568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.053600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.053730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.053761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.053970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.054005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.054279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.054294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.054535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.054550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.054716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.054731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.054913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.054946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.055236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.055268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.055460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.055491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.055756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.056043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.056059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.056149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.056162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.056381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.056396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.056667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.056682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.056939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.056963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.057136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.057152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.057408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.057424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.057701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.057716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.057970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.057985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.058149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.058165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.058408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.058423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.058569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.623 [2024-11-20 10:44:14.058584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.623 qpair failed and we were unable to recover it.
00:27:13.623 [2024-11-20 10:44:14.058842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.623 [2024-11-20 10:44:14.058874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.623 qpair failed and we were unable to recover it. 00:27:13.623 [2024-11-20 10:44:14.059176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.623 [2024-11-20 10:44:14.059209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.623 qpair failed and we were unable to recover it. 00:27:13.623 [2024-11-20 10:44:14.059414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.623 [2024-11-20 10:44:14.059430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.623 qpair failed and we were unable to recover it. 00:27:13.623 [2024-11-20 10:44:14.059676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.623 [2024-11-20 10:44:14.059708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.623 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.059977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.060011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.060305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.060336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.060610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.060643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.060908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.060939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.061244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.061261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.061477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.061491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.061758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.061774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.061972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.061988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.062229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.062261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.062450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.062482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.062683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.062715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.062912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.062943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.063232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.063264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.063536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.063551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.063698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.063713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.063964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.063981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.064142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.064158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.064320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.064335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.064618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.064634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.064826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.064842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.065082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.065099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.065358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.065374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.065544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.065563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.065826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.065841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.066130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.066146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.066404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.066419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.066667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.066682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.066904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.066919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.067164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.067180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.067446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.067461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.067670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.067686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.067903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.067918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.068082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.068098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.068248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.068263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.068472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.068487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.068648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.068664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.068913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.068929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.069152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.069169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.069352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.069368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.069582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.069596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 00:27:13.624 [2024-11-20 10:44:14.069707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.624 [2024-11-20 10:44:14.069721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.624 qpair failed and we were unable to recover it. 
00:27:13.624 [2024-11-20 10:44:14.069991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.070007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.070199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.070216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.070418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.070699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.070714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.070836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.070851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.071095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.071111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.071401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.071433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.071647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.071679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.071946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.071994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.072292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.072308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.072468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.072484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.072662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.072901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.072933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.073245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.073282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.073534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.073549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.073726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.073741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.073970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.073987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.074254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.074288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.074489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.074521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.074696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.074728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.075001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.075035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.075291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.075307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.075572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.075587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.075731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.075746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.075933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.075952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.076213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.076246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.076511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.076542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.076670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.076701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.076982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.077025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.077225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.077241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.077385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.077400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.077667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.077699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.077899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.077931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.078098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.078131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.078427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.078442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.078654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.078669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.078883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.078898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.079068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.079085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.079247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.079263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.079417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.079432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.079590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.079605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.079794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.079809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.080072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.080088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.080287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.080319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.080536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.080568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 
00:27:13.625 [2024-11-20 10:44:14.080756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.080788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.625 qpair failed and we were unable to recover it. 00:27:13.625 [2024-11-20 10:44:14.081041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.625 [2024-11-20 10:44:14.081079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.081275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.081307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.081551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.081566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.081810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.081826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.081929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.081945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.082140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.082155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.082328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.082343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.082503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.082518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.082711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.082754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.082970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.083204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.083426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.083587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.083746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.083924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.083939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.084096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.084111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.084357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.084372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.084559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.084575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.084856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.084871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.085094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.085112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.085359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.085391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.085660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.085691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.086015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.086050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.086334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.086366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.086589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.086620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.086800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.086832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.087046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.087080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.087334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.087350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.087612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.087628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.087841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.087856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.088091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.088110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.088271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.088287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.088476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.088491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.088734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.088765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.089040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.089077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.089305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.089338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.089618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.089651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.089782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.089816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.090091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.090124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.090332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.090365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.090558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.090573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.090757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.090772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.091057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.091073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.091323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.091339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 
00:27:13.626 [2024-11-20 10:44:14.091504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.091519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.091760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.091792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.092045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.092078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.626 [2024-11-20 10:44:14.092341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.626 [2024-11-20 10:44:14.092373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.626 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.092675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.092707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.092977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.093016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.093302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.093335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.093601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.093617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.093726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.093741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.093930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.093945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.094207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.094223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.094417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.094433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.094579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.094594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.094850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.094887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.095110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.095145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.095329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.095361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.095635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.095666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.095944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.095986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.096177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.096193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.096412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.096427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.096659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.096674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.096945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.097002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.097190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.097221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.097424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.097456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.097733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.097764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.098003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.098018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.098203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.098218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.098455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.098470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.098715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.098730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.098876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.098892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.099138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.099171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.099385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.099418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.099683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.099715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.099931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.099976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.100205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.100464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.100479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.100713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.100728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.100979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.100997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.101216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.101232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.101401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.101417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.101524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.101545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.101762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.101777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.101963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.101979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.102169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.102184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 00:27:13.627 [2024-11-20 10:44:14.102383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.627 [2024-11-20 10:44:14.102415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.627 qpair failed and we were unable to recover it. 
00:27:13.627 [2024-11-20 10:44:14.102701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.102732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.102870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.102901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.103040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.103073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.103327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.103343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.103428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.103442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.103529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.103542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.627 qpair failed and we were unable to recover it.
00:27:13.627 [2024-11-20 10:44:14.103687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.627 [2024-11-20 10:44:14.103703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.103812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.103916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.103931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.104091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.104107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.104258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.104273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.104529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.104544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.104688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.104703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.104938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.104965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.105140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.105155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.105394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.105409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.105621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.105637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.105799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.105814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.105983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.105999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.106165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.106181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.106426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.106441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.106719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.106735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.106997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.107014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.107241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.107256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.107378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.107393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.107538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.107553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.107782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.107814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.108037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.108069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.108345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.108376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.108664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.108695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.108982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.109025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.109314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.109354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.109528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.109542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.109785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.109818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.110026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.110085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.110346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.110378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.110620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.110652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.110902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.110934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.111201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.111217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.111398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.111413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.111584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.111615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.111922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.111963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.112162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.112206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.112446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.112461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.112698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.112713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.112871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.112886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.113132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.113149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.113309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.113325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.113567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.113599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.113876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.628 [2024-11-20 10:44:14.113907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.628 qpair failed and we were unable to recover it.
00:27:13.628 [2024-11-20 10:44:14.114208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.114242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.114508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.114539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.114819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.114849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.115139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.115172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.115451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.115483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.115771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.115802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.116101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.116141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.116402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.116418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.116597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.116612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.116784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.116799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.116961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.116980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.117216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.117231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.117393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.117589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.117608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.117766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.117781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.118001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.118018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.118267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.118283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.118463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.118478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.118731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.118746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.118982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.118998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.119241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.119256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.119434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.119449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.119723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.119754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.120019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.120053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.120354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.120369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.120532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.120547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.120783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.120813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.121073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.121109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.121359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.121395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.121581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.121596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.121840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.121855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.122036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.122052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.122214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.122229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.122466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.122481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.122649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.122664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.122926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.122942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.123101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.123117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.123330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.123346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.123505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.123519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.123698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.123713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.123865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.123884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.124059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.124075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.124194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.124209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.124368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.124382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.124695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.629 [2024-11-20 10:44:14.124726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.629 qpair failed and we were unable to recover it.
00:27:13.629 [2024-11-20 10:44:14.124986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.629 [2024-11-20 10:44:14.125022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.629 qpair failed and we were unable to recover it. 00:27:13.629 [2024-11-20 10:44:14.125143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.629 [2024-11-20 10:44:14.125158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.629 qpair failed and we were unable to recover it. 00:27:13.629 [2024-11-20 10:44:14.125369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.629 [2024-11-20 10:44:14.125384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.629 qpair failed and we were unable to recover it. 00:27:13.629 [2024-11-20 10:44:14.125549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.125565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.125807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.125838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.126103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.126136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.126316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.126347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.126566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.126581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.126833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.126864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.127095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.127129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.127404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.127436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.127719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.127735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.127971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.127987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.128226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.128242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.128354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.128370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.128556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.128571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.128763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.128795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.128996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.129033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.129311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.129343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.129618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.129650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.129851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.129883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.130131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.130164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.130434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.130449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.130679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.130695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.130909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.130925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.131091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.131106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.131261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.131291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.131502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.131534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.131732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.131763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.132040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.132073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.132352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.132392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.132580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.132595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.132686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.132699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.132870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.132885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.133043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.133060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.133342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.133374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.133672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.133704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.133937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.133994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.134289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.134321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.134599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.134615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.134795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.134810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.135024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.135040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.135294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.135326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.135596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.135627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.135921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.135969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.136229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.136244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.136490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.136505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.136693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.136709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.136956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.136976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.137202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.137235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 
00:27:13.630 [2024-11-20 10:44:14.137444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.630 [2024-11-20 10:44:14.137476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.630 qpair failed and we were unable to recover it. 00:27:13.630 [2024-11-20 10:44:14.137602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.137633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.137906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.137937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.138233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.138265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.138540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.138571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.138865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.138897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.139114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.139147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.139390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.139405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.139640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.139655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.139923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.139939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.140108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.140124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.140323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.140355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.140610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.140641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.140872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.140910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.141200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.141236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.141537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.141831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.141863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.142070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.142104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.142249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.142265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.142421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.142436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.142600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.142614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.142862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.142877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.143043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.143058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.143223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.143238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.143466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.143481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.143725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.143740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.143970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.143986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.144235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.144250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.144354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.144369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.144600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.144615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.144836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.144851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.145011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.145222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.145395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.145588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.631 [2024-11-20 10:44:14.145747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.145920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.145935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.146209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.146224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.146385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.146400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 00:27:13.631 [2024-11-20 10:44:14.146656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.631 [2024-11-20 10:44:14.146671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.631 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.170864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.170880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.171032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.171048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.171214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.171228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.171485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.171500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.171723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.171739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.171896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.171910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.172095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.172183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.172357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.172463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.172719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.172888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.172902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.173093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.173114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.173275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.173291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.173454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.173470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.173725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.173741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.173891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.173906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.174149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.174166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.174309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.174324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.174571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.174604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.174810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.174844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.175123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.175159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.175452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.175468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.175576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.175592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.175832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.175850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.176087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.176104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.176293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.176308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.176575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.176592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.176744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.176760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.176917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.176932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.177131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.177148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.177404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.177436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.177720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.177752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.177867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.177897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.178189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.178224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 
00:27:13.634 [2024-11-20 10:44:14.178335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.178348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.178540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.178555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.178724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.634 [2024-11-20 10:44:14.178770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.634 qpair failed and we were unable to recover it. 00:27:13.634 [2024-11-20 10:44:14.178973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.179009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.179308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.179340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.179629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.179645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.179876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.179891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.180136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.180151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.180313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.180330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.180601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.180638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.180918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.180963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.181234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.181266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.181467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.181498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.181780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.181814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.181938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.182001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.182204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.182238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.182433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.182466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.182661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.182676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.182965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.183000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.183142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.183176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.183370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.183401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.183683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.183723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.183929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.183985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.184184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.184216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.184414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.184447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.184697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.184713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.184996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.185014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.185239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.185258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.185493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.185512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.185760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.185780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.185943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.185968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.186237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.186270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.186530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.186562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.186854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.186870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.187057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.187074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.187297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.187333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.187566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.187598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.187886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.187919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.188233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.188266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.188512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.188529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.635 [2024-11-20 10:44:14.188732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.188749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.189000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.189019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.189257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.189274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.189516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.189532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 00:27:13.635 [2024-11-20 10:44:14.189624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.635 [2024-11-20 10:44:14.189639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.635 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.214265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.214304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.214529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.214561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.214825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.214857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.215133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.215175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.215457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.215491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.215792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.215823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.216116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.216150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.216372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.216405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.216547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.216579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.216781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.216796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.216991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.217176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.217191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.217393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.217412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.217565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.217581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.217806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.217822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.218011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.218028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.218191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.218208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.218422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.218455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.218609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.218641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.218894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.218928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.219265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.219305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.219563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.219594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.219805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.219822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.220008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.220026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.220290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.220321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.220596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.220631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.220935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.221008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.221264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.221300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.221567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.221606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.221825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.221842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.222010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.222027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.222117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.222131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.222360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.222546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.222561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-11-20 10:44:14.222806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.222823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-11-20 10:44:14.223055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-11-20 10:44:14.223072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.223341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.223362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.223529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.223545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.223825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.223860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.223984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.224032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.224288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.224321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.224540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.224556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.224734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.224752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.224979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.224995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.225262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.225279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.225433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.225449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.225660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.225694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.225892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.225924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.226193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.226229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.226507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.226525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.226775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.226793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.227019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.227037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.227204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.227219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.227450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.227485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.227772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.227809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.228109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.228147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.228431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.228463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.228611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.228644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.228971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.228989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.229142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.229305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.229405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.229669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.229789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.229881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.229896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.230081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.230100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.230263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.230279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.230476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.230494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.230655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.230671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.230833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.230848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.231018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.231036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.231200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.231216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.231483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.231500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.231665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.231682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.231928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.231945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-11-20 10:44:14.232052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.232067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.232243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.232260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.232511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.232545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-11-20 10:44:14.232747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-11-20 10:44:14.232779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-11-20 10:44:14.233033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-11-20 10:44:14.233068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [... repeated entries from 10:44:14.233284 through 10:44:14.261021 omitted: the same three-line pattern — "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420" (tqpair=0x7f642c000b90 for the entries between 10:44:14.237183 and 10:44:14.240216), "qpair failed and we were unable to recover it." ...]
00:27:13.642 [2024-11-20 10:44:14.261306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.261343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.261600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.261634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.261863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.261897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.262128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.262162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.262362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.262397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.262598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.262634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.262827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.262860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.263140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.263177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.263297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.263328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.263526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.263562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.263768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.263800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.264049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.264069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.264251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.264269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.264509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.264545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.264731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.264765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.264973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.265208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.265227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.265529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.265564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.265809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.265827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.265930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.265946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.266143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.266176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.266426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.266460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.266667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.266701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.266969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.266988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.267204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.267222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.267443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.267460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.267562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.267577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.267746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.267792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.267988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.268027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.268244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.268280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-11-20 10:44:14.268552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-11-20 10:44:14.268587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-11-20 10:44:14.268854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.268888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.269184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.269219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.269414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.269450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.269635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.269668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.269882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.269899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.270084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.270102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.270356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.270389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.270581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.270616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.270898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.270933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.271167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.271200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.271404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.271439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.271625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.271662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.271875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.271910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.272198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.272235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.272510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.272553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.272651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.272667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.272914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.272961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.273170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.273206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.273420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.273454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.273648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.273682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.273863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.273883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.274103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.274123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.274299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.274317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.274581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.274599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.274790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.274807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.275038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.275058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.275301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.275320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.275465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.275481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.275728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.275761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.275872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.275906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.276124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.276162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.276457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.276532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.276547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.276785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.276803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.276973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.276992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.277242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.277262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.277443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.277479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.277735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.277772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.277970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.277989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-11-20 10:44:14.278213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.278246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.278431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.278464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.278662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.278681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.278923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.278940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-11-20 10:44:14.279041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-11-20 10:44:14.279057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 
00:27:13.644 [2024-11-20 10:44:14.279254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.644 [2024-11-20 10:44:14.279273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.644 qpair failed and we were unable to recover it.
[... same three-message connect()/qpair failure sequence for tqpair=0x23a6ba0 (addr=10.0.0.2, port=4420) repeated verbatim through 2024-11-20 10:44:14.307310; duplicates elided ...]
00:27:13.646 [2024-11-20 10:44:14.307471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.307488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.307639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.307658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.307804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.307822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.307964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.307986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.308159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.308176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-11-20 10:44:14.308362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.308400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.308617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.308651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.308852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.308885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.309024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.309203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-11-20 10:44:14.309385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.309569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.309682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.309958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.309976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.310150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.310168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-11-20 10:44:14.310333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.310351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.310494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.310510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.310589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.310622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.310823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.310856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.310986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.311022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-11-20 10:44:14.311232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.311266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.311485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.311517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.311616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.311630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.311717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-11-20 10:44:14.311733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-11-20 10:44:14.311881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.311900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.312149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.312174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.312396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.312436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.312636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.312668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.312799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.312816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.312916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.312932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.313022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.313038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.313282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.313315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.313508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.313540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.313722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.313755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.313999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.314125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.314241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.314449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.314673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.314852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.314885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.315103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.315120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.315205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.315221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.315454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.315471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.315637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.315654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.315730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.315768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.315968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.316011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.316156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.316190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.316440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.316473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.316694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.316726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.317070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.317273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.317308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.317565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.317598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.317725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.317741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.317900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.317918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.318039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.318133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.318254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.318418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.318639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.318881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.318913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.319176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.319211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.319424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.319458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.319714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.319731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.319906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.319922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.320171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.320191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.320361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.320408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.320797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.320876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.321203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.321244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.321503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.321538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-11-20 10:44:14.321809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.321843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-11-20 10:44:14.322157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-11-20 10:44:14.322196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.937 (same sequence for tqpair=0x23a6ba0 repeated for attempts timestamped 10:44:14.322477 through 10:44:14.326988) 
00:27:13.937 [2024-11-20 10:44:14.327264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.327281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.327522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.327539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.327780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.327798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.327963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.327985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.328231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.328249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-11-20 10:44:14.328424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.328442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.328621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.328638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.328808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.328826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.329006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.329024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.329200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.329217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-11-20 10:44:14.329467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.329485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.329723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.329740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.329962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.329981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.330168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.330202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.330455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.330489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-11-20 10:44:14.330698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.330715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.330866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.330883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.331105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.331140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.331281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.331314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.331529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.331563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-11-20 10:44:14.331749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.331765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.331996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.332015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.332241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.332259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.332529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.332563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.332811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.332828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-11-20 10:44:14.332937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.332974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.333153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.333171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.333284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.333301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-11-20 10:44:14.333547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-11-20 10:44:14.333581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.333763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.333797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.333980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.334015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.334296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.334313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.334596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.334630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.334834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.334867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.335142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.335160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.335369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.335385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.335537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.335570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.335843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.335876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.336161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.336202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.336477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.336512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.336790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.336825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.337064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.337082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.337188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.337204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.337443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.337460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.337645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.337663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.337861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.337893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.338112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.338148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.338334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.338368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.338552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.338586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.338838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.338872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.339151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.339187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.339469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.339502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.339758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.339797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.340099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.340118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.340384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.340400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.340615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.340633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.340849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.340866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.341024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.341042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.341256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.341273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.341490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.341507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.341770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.341804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.342143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.342179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.342464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.342498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.342772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.342789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.343013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.343049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-11-20 10:44:14.343324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.343358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-11-20 10:44:14.343645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-11-20 10:44:14.343680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.343958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.343976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.344191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.344209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.344425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.344442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-11-20 10:44:14.344723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.344770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.344912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.344946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.345096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.345129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.345320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.345353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-11-20 10:44:14.345630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.345663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-11-20 10:44:14.345945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-11-20 10:44:14.345970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it.
[The identical error triplet — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, the sock connection error for tqpair=0x23a6ba0 (addr=10.0.0.2, port=4420) in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 10:44:14.346 through 10:44:14.374; the ~110 duplicate occurrences are elided here.]
00:27:13.942 [2024-11-20 10:44:14.374862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.374879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.374995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.375176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.375302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.375410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-11-20 10:44:14.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.375874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.375908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.376131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.376166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.376351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.376369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.376461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.376477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-11-20 10:44:14.376645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.376661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.376819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.376989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.377166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.377184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.377747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.377780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-11-20 10:44:14.377973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.377992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.378188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.378206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.378400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.378418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.378582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.378599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.378768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.378786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-11-20 10:44:14.378939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.378964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.379148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.379165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.379427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.379445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-11-20 10:44:14.379722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-11-20 10:44:14.379739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.379972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.379990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.380218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.380235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.380382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.380400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.380638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.380655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.380812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.380828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.381020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.381213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.381230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.381411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.381429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.381611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.381628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.381811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.381829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.381999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.382016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.382165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.382181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.382427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.382445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.382641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.382657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.382872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.382890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.383135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.383154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.383315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.383332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.383533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.383552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.383784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.383819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.384011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.384046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.384188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.384221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.384496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.384529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.384780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.384814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.385096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.385282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.385410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.385599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.385799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.385975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.385997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.386107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.386122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.386317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.386335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.386597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.386615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.386881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.386898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.387089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.387107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.387324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.387341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.387504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.387521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-11-20 10:44:14.387679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.387696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-11-20 10:44:14.387877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-11-20 10:44:14.387895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.388069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.388087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.388238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.388255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.388469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.388487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-11-20 10:44:14.388726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.388743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.388923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.388940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.389087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.389105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.389361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.389378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.389522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.389539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-11-20 10:44:14.389754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.389771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.389934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.389960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.390148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.390164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.390283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.390301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-11-20 10:44:14.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-11-20 10:44:14.390513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-11-20 10:44:14.390714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.944 [2024-11-20 10:44:14.390747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.944 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() errno = 111; nvme_tcp.c:2288 sock connection error; "qpair failed and we were unable to recover it.") repeats ~115 more times between 10:44:14.391 and 10:44:14.418, all for tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 ...]
00:27:13.947 [2024-11-20 10:44:14.418898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.418933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.419201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.419236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.419423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.419440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.419583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.419602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.419813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.419831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-11-20 10:44:14.419987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.420255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.420417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.420594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.420763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-11-20 10:44:14.420863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.420879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.420995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.421049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.421264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.421298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.421497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.421532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.421834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.421869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-11-20 10:44:14.422165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.422200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.422409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.422443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.422740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.422777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-11-20 10:44:14.423056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-11-20 10:44:14.423075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.423260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.423295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.423486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.423522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.423716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.423749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.423933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.423958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.424083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.424102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.424255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.424274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.424504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.424537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.424812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.424847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.424982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.425019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.425265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.425284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.425506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.425523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.425617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.425633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.425672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4af0 (9): Bad file descriptor 00:27:13.948 [2024-11-20 10:44:14.426078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.426157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.426464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.426502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.426752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.426791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.427095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.427273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.427512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.427755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.427865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.427976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.427992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.428082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.428098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.428266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.428307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.428510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.428546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.428678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.428713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.428914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.428931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.429044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.429063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.429158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.429172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.429346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.429380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.429650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.429684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.429872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.430073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.430095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.430251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.430269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.430383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.430400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.430561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.430580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.430754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.430788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-11-20 10:44:14.431052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-11-20 10:44:14.431090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-11-20 10:44:14.431344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.431378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.431685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.431718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.432021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.432057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.432188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.432205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-11-20 10:44:14.432380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.432403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.432671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.432928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.432982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.433274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.433307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.433580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.433620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-11-20 10:44:14.433898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.433945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.434127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.434147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.434314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.434332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.434574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.434611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.434837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.434872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-11-20 10:44:14.435111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.435150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.435419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.435457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.435648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.435683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.435873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.435909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-11-20 10:44:14.436101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-11-20 10:44:14.436126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-11-20 10:44:14.436416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.436436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.436621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.436663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.436896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.436932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.437189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.437208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.437437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.437455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.437626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.437643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.437745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.437761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.437937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.437984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.438263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.438297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.438586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.438624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.438897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.438933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.439085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.439119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.439253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.439288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.439486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.439520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.439799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.439845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.440111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.440131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.440315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.440334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.440601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.440625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.440905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-11-20 10:44:14.440924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
00:27:13.949 [2024-11-20 10:44:14.441133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.441155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.441323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.441340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.441520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.441539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.441778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.441813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.442976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.442996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.443170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.443192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.443431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.443449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.443567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.443588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.443754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.443773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.444036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.444295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.444394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.444630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.444817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.444989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.445009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.445262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.445280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.445473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.445490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.445650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.445668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.445846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.445863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.446118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.446343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.446513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.446702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.446899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.446991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.447010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.447128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.447144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.447394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.447414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.447639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.447661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.950 [2024-11-20 10:44:14.447832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.950 [2024-11-20 10:44:14.447850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.950 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.448007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.448028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.448275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.448293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.448511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.448528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.448732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.448751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.448994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.449843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.449993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.450013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.450282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.450301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.450475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.450495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.450641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.450660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.450845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.450864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.451067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.451087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.451273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.451292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.451551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.451570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.451790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.451809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.451963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.451983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.452107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.452125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.452310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.452327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.452582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.452599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.452787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.452805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.452966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.452986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.453135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.453154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.453301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.453320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.453542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.453561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.453735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.453756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.453909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.453926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.454963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.454982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.455145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.455166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.951 [2024-11-20 10:44:14.455291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.951 [2024-11-20 10:44:14.455309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.951 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.455535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.455554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.455733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.455750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.455992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.456176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.456194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.456358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.456375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.456580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.456599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.456713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.456730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.456997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.457017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.457244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.457262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.457442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.457459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.457700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.457717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.457874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.457894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.458069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.458088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.458180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.458197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.458444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.458462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.458627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.458644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.458883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.458901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.459155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.459175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.459405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.459422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.459578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.459597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.459761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.459779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.459959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.459979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.460143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.460162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.460270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.952 [2024-11-20 10:44:14.460287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.952 qpair failed and we were unable to recover it.
00:27:13.952 [2024-11-20 10:44:14.460372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.460389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.460493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.460509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.460682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.460701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.460850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.460866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.461013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.461031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 
00:27:13.952 [2024-11-20 10:44:14.461274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.461292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.461390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.461407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.461645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.461662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.461878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.461898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.461987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.462004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 
00:27:13.952 [2024-11-20 10:44:14.462186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.462203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.462375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.462393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.462614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.462630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.462872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.952 [2024-11-20 10:44:14.462891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.952 qpair failed and we were unable to recover it. 00:27:13.952 [2024-11-20 10:44:14.463109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.463126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.463219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.463235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.463457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.463475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.463651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.463667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.463820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.463838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.464089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.464109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.464352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.464370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.464626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.464643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.464878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.464897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.465131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.465150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.465317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.465338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.465582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.465600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.465847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.465864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.465968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.465986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.466134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.466152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.466375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.466394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.466654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.466672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.466844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.466861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.467148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.467168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.467336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.467355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.467526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.467542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.467653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.467672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.467787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.467805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.468029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.468050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.468268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.468286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.468442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.468461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.468610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.468628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.468900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.468919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.469094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.469112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.469190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.469206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.469351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.469369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.469613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.469630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.469789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.469808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.470025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.470045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.470283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.470301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.470468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.470488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 
00:27:13.953 [2024-11-20 10:44:14.470640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.470658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.470902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.953 [2024-11-20 10:44:14.470920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.953 qpair failed and we were unable to recover it. 00:27:13.953 [2024-11-20 10:44:14.471046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.471214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.471395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 
00:27:13.954 [2024-11-20 10:44:14.471634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.471808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.471974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.471993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.472212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.472231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.472383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.472402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 
00:27:13.954 [2024-11-20 10:44:14.472659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.472678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.472840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.472857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.473096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.473114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.473263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.473282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.473373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.473388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 
00:27:13.954 [2024-11-20 10:44:14.473631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.473648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.473806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.473823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.473987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.474007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.474260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.474278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.474470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.474489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 
00:27:13.954 [2024-11-20 10:44:14.474701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.474720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.474961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.474979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.475075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.475091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.475331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.475347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-11-20 10:44:14.475541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.475558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 
00:27:13.954 [2024-11-20 10:44:14.475653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-11-20 10:44:14.475669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it.
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 10:44:14.475653 through 10:44:14.498163, always with errno = 111, addr=10.0.0.2, port=4420, and the trailer "qpair failed and we were unable to recover it."; the reported tqpair alternates between 0x23a6ba0, 0x7f6420000b90, and 0x7f642c000b90 ...]
00:27:13.957 [2024-11-20 10:44:14.498405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.498428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.498582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.498600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.498760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.498777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.498971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.498991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.499087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-11-20 10:44:14.499270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.499382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.499546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.499730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-11-20 10:44:14.499902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 10:44:14.499920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-11-20 10:44:14.500104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.500282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.500403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.500575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.500772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.500879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.500896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.501054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.501245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.501456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.501628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.501729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.501860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.501884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.502055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.502073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.502240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.502259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.502421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.502443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.502668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.502686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.502966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.502985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.503138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.503157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.503332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.503350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.503581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.503600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.503843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.503861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.504028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.504049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.504290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.504309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.504497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.504516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.504679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.504696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.504853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.504871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.505104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.505124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.505281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.505300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.505549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.505567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.505736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.505754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.505910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.505927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.506204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.506228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.506487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.506503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.506663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.506680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.506899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.506916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-11-20 10:44:14.507077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.507097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.507257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-11-20 10:44:14.507274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-11-20 10:44:14.507497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.507515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.507757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.507774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.507966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.507985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.508261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.508279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.508516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.508539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.508687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.508706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.508814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.508832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.509003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.509265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.509451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.509636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.509873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.509982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.509999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.510232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.510249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.510417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.510435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.510649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.510903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.511083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.511102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.511265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.511282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.511463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.511481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.511709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.511727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.511992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.512009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.512173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.512193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.512451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.512468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.512737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.512753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.512974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.512995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.513174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.513193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-11-20 10:44:14.513279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-11-20 10:44:14.513296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-11-20 10:44:14.513549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.959 [2024-11-20 10:44:14.513567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.959 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable-qpair message repeated through 10:44:14.538212 for tqpair=0x23a6ba0 and briefly tqpair=0x7f6420000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:13.962 [2024-11-20 10:44:14.538445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-11-20 10:44:14.538462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.962 qpair failed and we were unable to recover it. 00:27:13.962 [2024-11-20 10:44:14.538638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-11-20 10:44:14.538655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.962 qpair failed and we were unable to recover it. 00:27:13.962 [2024-11-20 10:44:14.538871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-11-20 10:44:14.538888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.962 qpair failed and we were unable to recover it. 00:27:13.962 [2024-11-20 10:44:14.539163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-11-20 10:44:14.539181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.962 qpair failed and we were unable to recover it. 00:27:13.962 [2024-11-20 10:44:14.539286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-11-20 10:44:14.539304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.962 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.539514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.539530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.539700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.539717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.539941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.539964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.540175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.540192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.540440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.540456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.540643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.540667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.540889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.540907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.541066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.541084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.541311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.541329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.541420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.541435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.541596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.541613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.541826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.541843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.542058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.542077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.542315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.542333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.542597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.542614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.542858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.542875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.543085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.543102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.543265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.543282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.543498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.543514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.543677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.543694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.543841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.543858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.544044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.544062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.544301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.544319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.544562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.544579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.544817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.544834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.544945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.544970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.545196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.545213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.545293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.545309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.545465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.545482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.545722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.545739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.545983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.546003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.546249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.546266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.546432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.546449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.546614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.546632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.546877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.546894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-11-20 10:44:14.547111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.547129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.547218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.547234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.547463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:44:14.547480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-11-20 10:44:14.547574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.547590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.547868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.547885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.548088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.548105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.548368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.548384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.548480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.548495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.548758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.548775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.549026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.549043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.549255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.549272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.549448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.549465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.549710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.549727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.549868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.549884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.550063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.550080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.550164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.550179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.550342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.550359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.550546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.550563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.550793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.550810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.551079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.551097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.551322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.551339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.551550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.551567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.551839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.551856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.552118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.552135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.552322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.552339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.552605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.552621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.552727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.552742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.552989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.553006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.553184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.553201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.553343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.553359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.553504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.553521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.553762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.553778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.554012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.554031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.554254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.554271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-11-20 10:44:14.554432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:44:14.554449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-11-20 10:44:14.554626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.964 [2024-11-20 10:44:14.554643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.964 qpair failed and we were unable to recover it.
00:27:13.964 [... the same three-line error repeats continuously from 10:44:14.554831 through 10:44:14.578124, all against addr=10.0.0.2, port=4420; every occurrence reports tqpair=0x23a6ba0 except three between 10:44:14.572190 and 10:44:14.572712, which report tqpair=0x7f6420000b90 ...]
00:27:13.968 [2024-11-20 10:44:14.578350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.578366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.578453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.578469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.578625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.578641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.578847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.578863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.578945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.578979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.579165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.579181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.579336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.579352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.579582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.579598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.579772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.579788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.579960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.579978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.580212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.580232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.580469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.580485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.580691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.580707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.580927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.580943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.581178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.581194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.581348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.581365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.581618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.581635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.581859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.581876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.582082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.582211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.582333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.582523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.582715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.582873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.582889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.583073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.583090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.583269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.583284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.583452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.583467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.583633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.583649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.583898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.583914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.584150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.584167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.584263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.584279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.584428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.584445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.584714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.584731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.584915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.584931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.585019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.585036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 
00:27:13.968 [2024-11-20 10:44:14.585205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.585221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.585376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.585393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.968 [2024-11-20 10:44:14.585627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.968 [2024-11-20 10:44:14.585646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.968 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.585801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.585817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.586036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.586055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.586214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.586230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.586395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.586411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.586643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.586659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.586742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.586758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.586990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.587213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.587365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.587529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.587694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.587859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.587875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.588014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.588225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.588340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.588526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.588681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.588850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.588866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.589007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.589234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.589411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.589635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.589740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.589929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.589945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.590140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.590157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.590259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.590275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.590439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.590456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.590606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.590622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.590897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.590913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.591056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.591072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.591290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.591306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.591463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.591479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 
00:27:13.969 [2024-11-20 10:44:14.591711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.591727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.591959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.591975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.969 qpair failed and we were unable to recover it. 00:27:13.969 [2024-11-20 10:44:14.592207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.969 [2024-11-20 10:44:14.592223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.970 qpair failed and we were unable to recover it. 00:27:13.970 [2024-11-20 10:44:14.592307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.970 [2024-11-20 10:44:14.592323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.970 qpair failed and we were unable to recover it. 00:27:13.970 [2024-11-20 10:44:14.592459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.970 [2024-11-20 10:44:14.592475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.970 qpair failed and we were unable to recover it. 
00:27:13.970 [2024-11-20 10:44:14.592680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.970 [2024-11-20 10:44:14.592696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:13.970 qpair failed and we were unable to recover it.
[... identical three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated continuously from 10:44:14.592963 through 10:44:14.615006 ...]
00:27:13.973 [2024-11-20 10:44:14.615248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.615265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.615423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.615439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.615645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.615661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.615870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.615886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.616090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.616107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-11-20 10:44:14.616191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.616206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.616350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.616366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.616577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.616593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.616837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.616854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.617007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.617024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-11-20 10:44:14.617166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.617183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.617394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.617410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.617629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.617644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.617854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.617869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.618078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.618094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-11-20 10:44:14.618389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.618405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.618618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.618633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.618885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.618901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.619066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.619084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.619242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.619258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-11-20 10:44:14.619432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.619448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.619635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.619650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.619869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.619884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.620040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.620056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.620210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.620226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-11-20 10:44:14.620408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.620425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.620561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.620578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-11-20 10:44:14.620657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-11-20 10:44:14.620673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.620847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.620862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.621087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.621104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.621285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.621301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.621533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.621549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.621622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.621638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.621790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.621807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.622036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.622261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.622373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.622535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.622735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.622835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.622945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.622983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.623237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.623252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.623488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.623503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.623681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.623697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.623918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.623934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.624108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.624124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.624360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.624376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.624602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.624617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.624792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.624807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.625032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.625295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.625310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.625452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.625468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.625670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.625685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.625820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.625835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.625998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.626242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.626414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.626514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.626615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.626866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.626881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 
00:27:13.974 [2024-11-20 10:44:14.627086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.627104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.627334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.627349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.627515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.627531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.627631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.974 [2024-11-20 10:44:14.627647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.974 qpair failed and we were unable to recover it. 00:27:13.974 [2024-11-20 10:44:14.627782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.627801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 
00:27:13.975 [2024-11-20 10:44:14.628040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.628064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.628265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.628282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.628507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.628522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.628605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.628621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.628832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.628848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 
00:27:13.975 [2024-11-20 10:44:14.629075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.629100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.629329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.629345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.629548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.629563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.629711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.629727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.629875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.629891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 
00:27:13.975 [2024-11-20 10:44:14.630042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.630058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.630261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.630277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.630521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.630536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.630621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.630636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 00:27:13.975 [2024-11-20 10:44:14.630708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.975 [2024-11-20 10:44:14.630723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:13.975 qpair failed and we were unable to recover it. 
00:27:14.266 [2024-11-20 10:44:14.651796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.651811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.651912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.651927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3645580 Killed "${NVMF_APP[@]}" "$@"
00:27:14.266 [2024-11-20 10:44:14.652102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.652118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.652320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.652335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.652483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.652497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.652737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.652752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:14.266 [2024-11-20 10:44:14.652966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.652984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.653116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.653131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:14.266 [2024-11-20 10:44:14.653303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.653317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:14.266 [2024-11-20 10:44:14.653461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.653480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.653706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.653721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:14.266 [2024-11-20 10:44:14.653893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.653907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.266 [2024-11-20 10:44:14.654064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-11-20 10:44:14.654079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
00:27:14.266 [2024-11-20 10:44:14.654281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.654295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.654493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.654508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.654657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.654670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.654764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.654778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.654928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.654942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 
00:27:14.266 [2024-11-20 10:44:14.655031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.655118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.655324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.655439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.655670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 
00:27:14.266 [2024-11-20 10:44:14.655856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.655870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.656136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.656154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.656257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.656273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.656481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.656495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.656744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.656758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 
00:27:14.266 [2024-11-20 10:44:14.656967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.656987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.657092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.657107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.657264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.657278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.657380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.657395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 00:27:14.266 [2024-11-20 10:44:14.657668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.266 [2024-11-20 10:44:14.657682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.266 qpair failed and we were unable to recover it. 
00:27:14.267 [2024-11-20 10:44:14.657885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.657898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.658043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.658060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.658248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.658263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.658400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.658415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.658521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.658534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 
00:27:14.267 [2024-11-20 10:44:14.658849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.658863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.659065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.659080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.659228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.659243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.659472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.659486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.659689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.659704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 
00:27:14.267 [2024-11-20 10:44:14.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.659937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.660130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.660144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.660309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.660323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.660526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.660543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.660648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.660664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 
00:27:14.267 [2024-11-20 10:44:14.660824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3646330
00:27:14.267 [2024-11-20 10:44:14.660841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.661042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3646330
00:27:14.267 [2024-11-20 10:44:14.661233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.661389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.661498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3646330 ']'
00:27:14.267 [2024-11-20 10:44:14.661597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:14.267 [2024-11-20 10:44:14.661913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.661928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.662144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:14.267 [2024-11-20 10:44:14.662160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.662311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.662327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 [2024-11-20 10:44:14.662408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.662422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
[2024-11-20 10:44:14.662792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.267 [2024-11-20 10:44:14.662809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.267 qpair failed and we were unable to recover it.
00:27:14.267 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.267 [2024-11-20 10:44:14.663061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.663280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.663443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.663554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.663662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 
00:27:14.267 [2024-11-20 10:44:14.663852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.663868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.267 qpair failed and we were unable to recover it. 00:27:14.267 [2024-11-20 10:44:14.664037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.267 [2024-11-20 10:44:14.664054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-11-20 10:44:14.664196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-11-20 10:44:14.664212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-11-20 10:44:14.664390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-11-20 10:44:14.664406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-11-20 10:44:14.664556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-11-20 10:44:14.664571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 
00:27:14.268 [2024-11-20 10:44:14.664774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.268 [2024-11-20 10:44:14.664791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.268 qpair failed and we were unable to recover it.
[... the same three-line record — connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 10:44:14.664875 through 10:44:14.683532, mostly for tqpair=0x23a6ba0; tqpair values 0x7f642c000b90, 0x7f6424000b90, and 0x7f6420000b90 also appear, all with addr=10.0.0.2, port=4420 ...]
00:27:14.271 [2024-11-20 10:44:14.683629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.271 [2024-11-20 10:44:14.683643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.271 qpair failed and we were unable to recover it.
00:27:14.271 [2024-11-20 10:44:14.683854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.683870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.684713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.684940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.684967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.685370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.685813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.685829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.686079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.686316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.686418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.686660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.686772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.686786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.687431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.687831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.687936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.687957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.688044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.688143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.688237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.688395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 
00:27:14.272 [2024-11-20 10:44:14.688498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.272 [2024-11-20 10:44:14.688592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.272 [2024-11-20 10:44:14.688606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.272 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.688678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.688792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.688806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.688893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.688908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.688990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.689576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.689933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.689956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.690121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.690598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.690961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.690977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.691140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.691707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.691907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.691923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.692004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.692020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.692088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.692103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 
00:27:14.273 [2024-11-20 10:44:14.692186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.692201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.692284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.273 [2024-11-20 10:44:14.692301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.273 qpair failed and we were unable to recover it. 00:27:14.273 [2024-11-20 10:44:14.692367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.274 [2024-11-20 10:44:14.692381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.274 qpair failed and we were unable to recover it. 00:27:14.274 [2024-11-20 10:44:14.692461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.274 [2024-11-20 10:44:14.692475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.274 qpair failed and we were unable to recover it. 00:27:14.274 [2024-11-20 10:44:14.692549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.274 [2024-11-20 10:44:14.692564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.274 qpair failed and we were unable to recover it. 
00:27:14.274 [2024-11-20 10:44:14.692637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.274 [2024-11-20 10:44:14.692652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.274 qpair failed and we were unable to recover it.
00:27:14.277 [2024-11-20 10:44:14.705251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-11-20 10:44:14.705741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.705917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.705989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-11-20 10:44:14.706344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.706918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.706932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-11-20 10:44:14.707013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-11-20 10:44:14.707520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-11-20 10:44:14.707858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.707873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-11-20 10:44:14.708006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-11-20 10:44:14.708021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.708499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.708917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.708934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.709015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709291] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:27:14.278 [2024-11-20 10:44:14.709304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.709329] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.278 [2024-11-20 10:44:14.709382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.709882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.709969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.709984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.710490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.710861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.710876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.711033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.711050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-11-20 10:44:14.711130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.711145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-11-20 10:44:14.711226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-11-20 10:44:14.711241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.711375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.711389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.711523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.711539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.711698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.711713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-11-20 10:44:14.711846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.711861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.711932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.711977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-11-20 10:44:14.712483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.712942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.712964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-11-20 10:44:14.713165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.713180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.713263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.713278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.713343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.713357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.713492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.713506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-11-20 10:44:14.713584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-11-20 10:44:14.713598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-11-20 10:44:14.713678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.279 [2024-11-20 10:44:14.713692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.279 qpair failed and we were unable to recover it.
00:27:14.282 [... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~115 more times between 10:44:14.713777 and 10:44:14.726747 as the connection is retried ...]
00:27:14.282 [2024-11-20 10:44:14.726819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.726832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.726983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.726999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-11-20 10:44:14.727405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.727848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-11-20 10:44:14.727941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.727962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.728040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.728055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.728138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.728152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-11-20 10:44:14.728231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-11-20 10:44:14.728244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.728396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.728876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.728966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.728981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.729404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.729845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.729986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.730080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.730725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.730911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.730924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.731207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-11-20 10:44:14.731671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-11-20 10:44:14.731834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-11-20 10:44:14.731913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.731927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.732190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.732625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.732956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.732971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.733159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.733682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.733908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.733922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.734254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-11-20 10:44:14.734759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.734980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.734996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.735065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.735079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-11-20 10:44:14.735151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-11-20 10:44:14.735165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.287 [repeated entries suppressed: the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" triple for tqpair=0x23a6ba0, addr=10.0.0.2, port=4420 recurs continuously from 10:44:14.735259 through 10:44:14.747648]
00:27:14.287 [2024-11-20 10:44:14.747720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.287 [2024-11-20 10:44:14.747733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.287 qpair failed and we were unable to recover it. 00:27:14.287 [2024-11-20 10:44:14.747810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.287 [2024-11-20 10:44:14.747824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.287 qpair failed and we were unable to recover it. 00:27:14.287 [2024-11-20 10:44:14.747924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.747938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.748471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.748867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.748881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.749016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.749181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.749351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.749552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.749716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.749862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.749877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.750473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.750821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.750992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.751086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 
00:27:14.288 [2024-11-20 10:44:14.751682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.288 [2024-11-20 10:44:14.751799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.288 qpair failed and we were unable to recover it. 00:27:14.288 [2024-11-20 10:44:14.751880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.751895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.752346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.752838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.752934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.752954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.753443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.753957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.753978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.754073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.754670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.754912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.754926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.755267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-11-20 10:44:14.755796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.755970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-11-20 10:44:14.755986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-11-20 10:44:14.756065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-11-20 10:44:14.756221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-11-20 10:44:14.756318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-11-20 10:44:14.756438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-11-20 10:44:14.756585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-11-20 10:44:14.756678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-11-20 10:44:14.756783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-11-20 10:44:14.756880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-11-20 10:44:14.756894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it.
00:27:14.290 [... identical posix.c:1054:posix_sock_create (errno = 111) / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error pair for tqpair=0x23a6ba0, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeated continuously from 10:44:14.757049 through 10:44:14.770292 ...]
00:27:14.293 [2024-11-20 10:44:14.770440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.770454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.770548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.770562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.770703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.770718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.770787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.770801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.770882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.770897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 
00:27:14.293 [2024-11-20 10:44:14.771102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 
00:27:14.293 [2024-11-20 10:44:14.771685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.771871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.771885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.772042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.772057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.772199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.772213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 
00:27:14.293 [2024-11-20 10:44:14.772280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-11-20 10:44:14.772294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-11-20 10:44:14.772372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.772452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.772534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.772701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.772815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.772898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.772912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.773422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.773978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.773994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.774067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.774092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.294 [2024-11-20 10:44:14.774150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.774241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.774456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.774621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.774776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.774926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.774941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.775432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.775886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.775902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.776034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 
00:27:14.294 [2024-11-20 10:44:14.776636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.294 qpair failed and we were unable to recover it. 00:27:14.294 [2024-11-20 10:44:14.776884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.294 [2024-11-20 10:44:14.776898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.776977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.776992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 
00:27:14.295 [2024-11-20 10:44:14.777246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 
00:27:14.295 [2024-11-20 10:44:14.777673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.777916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.777930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.778013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.778166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 
00:27:14.295 [2024-11-20 10:44:14.778278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.778475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.778683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.778860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.778894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.779016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 
00:27:14.295 [2024-11-20 10:44:14.779233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.779340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.779434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.779584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 00:27:14.295 [2024-11-20 10:44:14.779680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.295 [2024-11-20 10:44:14.779695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.295 qpair failed and we were unable to recover it. 
00:27:14.295 [2024-11-20 10:44:14.779842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.295 [2024-11-20 10:44:14.779857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.295 qpair failed and we were unable to recover it.
00:27:14.298 [log trimmed: the same connect() failed / sock connection error / qpair failed messages repeated continuously from 10:44:14.779944 through 10:44:14.794487]
00:27:14.298 [2024-11-20 10:44:14.794686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-11-20 10:44:14.794701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-11-20 10:44:14.794787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-11-20 10:44:14.794802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-11-20 10:44:14.794934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.794956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.795183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.795198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.795411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.795425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.795597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.795613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.795817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.795835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.795984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.796363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.796927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.796942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.797155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.797171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.797313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.797328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.797474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.797489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.797638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.797654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.797796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.797812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.797986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.798622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.798962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.798976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.799238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-11-20 10:44:14.799865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.799960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.799978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.800129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-11-20 10:44:14.800145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-11-20 10:44:14.800215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.800303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.800453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.800597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.800694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.800838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.800919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.800933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.801052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.801212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.801359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.801636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.801804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.801919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.801933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.802561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.802975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.802991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.803236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.803850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.803955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.803976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.804058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.804074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.804222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.804236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.804375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.804390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-11-20 10:44:14.804534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-11-20 10:44:14.804548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-11-20 10:44:14.804635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.804649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.804729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.804744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.804823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.804838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.804915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.804929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 
00:27:14.301 [2024-11-20 10:44:14.805096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.805111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.805293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.805308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.805378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.805398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.805489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.805511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 00:27:14.301 [2024-11-20 10:44:14.805586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.301 [2024-11-20 10:44:14.805600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.301 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-11-20 10:44:14.816496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.816512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.816583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.816598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.816662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.816680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.816813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.816828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.816888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.816902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.816958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.304 [2024-11-20 10:44:14.816988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:14.304 [2024-11-20 10:44:14.816995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.304 [2024-11-20 10:44:14.817003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.304 [2024-11-20 10:44:14.817008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.304 [2024-11-20 10:44:14.817059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-11-20 10:44:14.817482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.817920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.817934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-11-20 10:44:14.818023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-11-20 10:44:14.818529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:14.304 [2024-11-20 10:44:14.818705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-11-20 10:44:14.818636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:14.304 [2024-11-20 10:44:14.818794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-11-20 10:44:14.818813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.818748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:14.305 [2024-11-20 10:44:14.818888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.818759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.305 [2024-11-20 10:44:14.818905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.818996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.819490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.819906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.819920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.820018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.820549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.820909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.820922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.821024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.821478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.821908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.821923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-11-20 10:44:14.821996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.822014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.822098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.822113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.822268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-11-20 10:44:14.822286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-11-20 10:44:14.822360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.822440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.822595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.822759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.822844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.822917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.822931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.823213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.823656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.823921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.823934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.824175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.824686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.824882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.824896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.825371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.825885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.825980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.825996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.826151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.826254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.826348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 
00:27:14.306 [2024-11-20 10:44:14.826508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.826590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.306 [2024-11-20 10:44:14.826689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.306 [2024-11-20 10:44:14.826703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.306 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.826776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.826790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.826941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.826962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.827108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.827755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.827960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.827979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.828311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.828835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.828923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.828938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.829572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.829821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.829989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.830202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 
00:27:14.307 [2024-11-20 10:44:14.830757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.307 qpair failed and we were unable to recover it. 00:27:14.307 [2024-11-20 10:44:14.830975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.307 [2024-11-20 10:44:14.830992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.831261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.831759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.831920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.831939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.832362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.832785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.832959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.832981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.833328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.833811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.833982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.833998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.834070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.834085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.834172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.834186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.834261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.834276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.308 [2024-11-20 10:44:14.834353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.834367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 
00:27:14.308 [2024-11-20 10:44:14.834438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.308 [2024-11-20 10:44:14.834455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.308 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.834539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.834554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.834628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.834644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.834726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.834742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.834881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.834897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 
00:27:14.309 [2024-11-20 10:44:14.834978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.834994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 
00:27:14.309 [2024-11-20 10:44:14.835580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.835919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.835935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 
00:27:14.309 [2024-11-20 10:44:14.836099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 
00:27:14.309 [2024-11-20 10:44:14.836574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.836969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.836985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 00:27:14.309 [2024-11-20 10:44:14.837069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.309 [2024-11-20 10:44:14.837085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.309 qpair failed and we were unable to recover it. 
00:27:14.309 [2024-11-20 10:44:14.837249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.837910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.837925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.838071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.838088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.838185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.838200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.838287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.838303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.838385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.309 [2024-11-20 10:44:14.838401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.309 qpair failed and we were unable to recover it.
00:27:14.309 [2024-11-20 10:44:14.838552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.838570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.838650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.838665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.838744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.838757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.838834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.838850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.838925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.838940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.839974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.839990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.840909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.840926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.841983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.841999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.842970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.310 [2024-11-20 10:44:14.842990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.310 qpair failed and we were unable to recover it.
00:27:14.310 [2024-11-20 10:44:14.843066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.843914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.843998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.844909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.844924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.845975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.845990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.846889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.846903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.847051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.847070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.847148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.847162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.847305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.311 [2024-11-20 10:44:14.847321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.311 qpair failed and we were unable to recover it.
00:27:14.311 [2024-11-20 10:44:14.847392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.312 [2024-11-20 10:44:14.847408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.312 qpair failed and we were unable to recover it.
00:27:14.312 [2024-11-20 10:44:14.847543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.312 [2024-11-20 10:44:14.847558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.312 qpair failed and we were unable to recover it.
00:27:14.312 [2024-11-20 10:44:14.847694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.312 [2024-11-20 10:44:14.847709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.312 qpair failed and we were unable to recover it.
00:27:14.312 [2024-11-20 10:44:14.847804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.312 [2024-11-20 10:44:14.847819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.312 qpair failed and we were unable to recover it.
00:27:14.312 [2024-11-20 10:44:14.847978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.312 [2024-11-20 10:44:14.847994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.312 qpair failed and we were unable to recover it.
00:27:14.312 [2024-11-20 10:44:14.848061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.848552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.848929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.848943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.849111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.849583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.849892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.849906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.850130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.850708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.850905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.850920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.851252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 
00:27:14.312 [2024-11-20 10:44:14.851772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.312 [2024-11-20 10:44:14.851787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.312 qpair failed and we were unable to recover it. 00:27:14.312 [2024-11-20 10:44:14.851855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.851870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.851954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.851970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.852312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.852815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.852925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.852940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.853369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.853793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.853892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.853906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.854425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.313 [2024-11-20 10:44:14.854837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 
00:27:14.313 [2024-11-20 10:44:14.854925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.313 [2024-11-20 10:44:14.854941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.313 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 
00:27:14.314 [2024-11-20 10:44:14.855516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.855871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 
00:27:14.314 [2024-11-20 10:44:14.855983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.855998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 
00:27:14.314 [2024-11-20 10:44:14.856500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.856825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 00:27:14.314 [2024-11-20 10:44:14.856989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.314 [2024-11-20 10:44:14.857004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.314 qpair failed and we were unable to recover it. 
00:27:14.314 [2024-11-20 10:44:14.857151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.314 [2024-11-20 10:44:14.857164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.314 qpair failed and we were unable to recover it.
[log trimmed: the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error for tqpair=0x23a6ba0 (10.0.0.2:4420) repeated continuously from 10:44:14.857151 through 10:44:14.869433; every recovery attempt failed]
00:27:14.317 [2024-11-20 10:44:14.869573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.317 [2024-11-20 10:44:14.869587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.317 qpair failed and we were unable to recover it. 00:27:14.317 [2024-11-20 10:44:14.869691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.317 [2024-11-20 10:44:14.869706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.317 qpair failed and we were unable to recover it. 00:27:14.317 [2024-11-20 10:44:14.869786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.317 [2024-11-20 10:44:14.869801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.317 qpair failed and we were unable to recover it. 00:27:14.317 [2024-11-20 10:44:14.869882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.317 [2024-11-20 10:44:14.869896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.317 qpair failed and we were unable to recover it. 00:27:14.317 [2024-11-20 10:44:14.869968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.870059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.870604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.870940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.870961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.871195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.871818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.871907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.871921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.872294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.872837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.872913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.872927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.873366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.318 [2024-11-20 10:44:14.873735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 
00:27:14.318 [2024-11-20 10:44:14.873831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.318 [2024-11-20 10:44:14.873847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.318 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.873915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.873930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.874355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.874743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.874831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.874846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.875358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.875756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.875922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.875936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.876606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.876969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.876983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.877125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 
00:27:14.319 [2024-11-20 10:44:14.877584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.319 qpair failed and we were unable to recover it. 00:27:14.319 [2024-11-20 10:44:14.877752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.319 [2024-11-20 10:44:14.877766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.877905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.877920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.877993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 
00:27:14.320 [2024-11-20 10:44:14.878078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.878163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.878254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.878397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 00:27:14.320 [2024-11-20 10:44:14.878482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.320 [2024-11-20 10:44:14.878496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.320 qpair failed and we were unable to recover it. 
00:27:14.320 [2024-11-20 10:44:14.878564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.878578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.878678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.878692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.878760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.878774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.878975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.878990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.879943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.879965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.880961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.880976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.320 [2024-11-20 10:44:14.881702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.320 [2024-11-20 10:44:14.881716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.320 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.881974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.881993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.882967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.882981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.883939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.883960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.884912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.884999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.885853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.321 [2024-11-20 10:44:14.885998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.321 [2024-11-20 10:44:14.886014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.321 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.886997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.887909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.887923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.888916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.322 [2024-11-20 10:44:14.888931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.322 qpair failed and we were unable to recover it.
00:27:14.322 [2024-11-20 10:44:14.889021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.322 [2024-11-20 10:44:14.889036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.322 qpair failed and we were unable to recover it. 00:27:14.322 [2024-11-20 10:44:14.889189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.322 [2024-11-20 10:44:14.889205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.322 qpair failed and we were unable to recover it. 00:27:14.322 [2024-11-20 10:44:14.889279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.322 [2024-11-20 10:44:14.889293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.322 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.889541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.889977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.889993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.890061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.890644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.890975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.890990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.891066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.891567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.891929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.891943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.892221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.892814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.892893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.892908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.893046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.893061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.893130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.893143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 00:27:14.323 [2024-11-20 10:44:14.893209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.323 [2024-11-20 10:44:14.893223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.323 qpair failed and we were unable to recover it. 
00:27:14.323 [2024-11-20 10:44:14.893302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.893395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.893493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.893570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.893658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.893807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.893904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.893918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.894263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.894776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.894872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.894886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.895299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.895732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.895977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.895993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.896313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 
00:27:14.324 [2024-11-20 10:44:14.896798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.324 [2024-11-20 10:44:14.896813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.324 qpair failed and we were unable to recover it. 00:27:14.324 [2024-11-20 10:44:14.896899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.896913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 
00:27:14.325 [2024-11-20 10:44:14.897263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 00:27:14.325 [2024-11-20 10:44:14.897613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it. 
00:27:14.325 [2024-11-20 10:44:14.897760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.325 [2024-11-20 10:44:14.897774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.325 qpair failed and we were unable to recover it.
[log collapsed: the error pair above (connect() failed, errno = 111 / ECONNREFUSED, followed by "sock connection error of tqpair=… with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.") repeats verbatim from 10:44:14.897842 through 10:44:14.909445; all repeats report tqpair=0x23a6ba0 except three at 10:44:14.907093-10:44:14.907470, which report tqpair=0x7f6420000b90.]
00:27:14.328 [2024-11-20 10:44:14.909517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.909530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.909672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.909686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.909756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.909771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.909841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.909855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.909943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.909969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 
00:27:14.328 [2024-11-20 10:44:14.910116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.910131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.910221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.910235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.910370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.328 [2024-11-20 10:44:14.910383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.328 qpair failed and we were unable to recover it. 00:27:14.328 [2024-11-20 10:44:14.910454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.910539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.910631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.910716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.910803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.910978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.910994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.911152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.911601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.911936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.911957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.912028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.912522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.912861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.912956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.912971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.913036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.913205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.913298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.913397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 
00:27:14.329 [2024-11-20 10:44:14.913492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.329 qpair failed and we were unable to recover it. 00:27:14.329 [2024-11-20 10:44:14.913580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.329 [2024-11-20 10:44:14.913594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.913669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.913683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.913825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.913839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.913913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.913927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.914026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.914549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.914975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.914992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.915061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.915543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.915911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.915925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.916012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.916465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 00:27:14.330 [2024-11-20 10:44:14.916836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.330 [2024-11-20 10:44:14.916849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.330 qpair failed and we were unable to recover it. 
00:27:14.330 [2024-11-20 10:44:14.916992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.331 [2024-11-20 10:44:14.917007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.331 qpair failed and we were unable to recover it. 00:27:14.331 [2024-11-20 10:44:14.917073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.331 [2024-11-20 10:44:14.917087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.331 qpair failed and we were unable to recover it. 00:27:14.331 [2024-11-20 10:44:14.917157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.331 [2024-11-20 10:44:14.917172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.331 qpair failed and we were unable to recover it. 00:27:14.331 [2024-11-20 10:44:14.917240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.331 [2024-11-20 10:44:14.917254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.331 qpair failed and we were unable to recover it. 00:27:14.331 [2024-11-20 10:44:14.917454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.331 [2024-11-20 10:44:14.917468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.331 qpair failed and we were unable to recover it. 
00:27:14.332 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:14.332 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:14.332 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:14.333 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:14.333 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.334 [2024-11-20 10:44:14.930032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 
00:27:14.334 [2024-11-20 10:44:14.930545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.930867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.930882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 
00:27:14.334 [2024-11-20 10:44:14.931195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 
00:27:14.334 [2024-11-20 10:44:14.931790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.931982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.931998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.932081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.932096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.932180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.334 [2024-11-20 10:44:14.932195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.334 qpair failed and we were unable to recover it. 00:27:14.334 [2024-11-20 10:44:14.932328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.932343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.932536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.932551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.932625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.932639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.932813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.932827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.933324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.933829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.933927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.933942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.934339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.934851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.934955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.934971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.935420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.935854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.935944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.935966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.936050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.936065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.936266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.936280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 00:27:14.335 [2024-11-20 10:44:14.936357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.335 [2024-11-20 10:44:14.936371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.335 qpair failed and we were unable to recover it. 
00:27:14.335 [2024-11-20 10:44:14.936444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.936546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.936634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.936717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.936811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 
00:27:14.336 [2024-11-20 10:44:14.936963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.936979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 
00:27:14.336 [2024-11-20 10:44:14.937550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.937916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.937931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 
00:27:14.336 [2024-11-20 10:44:14.938054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.938221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.938365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.938506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 00:27:14.336 [2024-11-20 10:44:14.938591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 
00:27:14.336 [2024-11-20 10:44:14.938700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.336 [2024-11-20 10:44:14.938717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.336 qpair failed and we were unable to recover it. 
00:27:14.339 [last message repeated through 2024-11-20 10:44:14.950: connect() failed, errno = 111; sock connection error of tqpair=0x23a6ba0 (and briefly tqpair=0x7f642c000b90) with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:27:14.339 [2024-11-20 10:44:14.950265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 00:27:14.339 [2024-11-20 10:44:14.950353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 00:27:14.339 [2024-11-20 10:44:14.950442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 00:27:14.339 [2024-11-20 10:44:14.950532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 00:27:14.339 [2024-11-20 10:44:14.950626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 
00:27:14.339 [2024-11-20 10:44:14.950714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.339 [2024-11-20 10:44:14.950729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.339 qpair failed and we were unable to recover it. 00:27:14.339 [2024-11-20 10:44:14.950810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.950824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.950916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.950930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.951197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.951632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.951961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.951975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.952042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.952475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.952895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.952909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.952994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.953323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 
00:27:14.340 [2024-11-20 10:44:14.953746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.340 [2024-11-20 10:44:14.953825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.340 [2024-11-20 10:44:14.953840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.340 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.953905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.953921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.954192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.954696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.954936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.954979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.955147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.955601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.955945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.955968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.956106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.956548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.956884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.956899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 
00:27:14.341 [2024-11-20 10:44:14.956987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.957003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.957074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.957088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.341 [2024-11-20 10:44:14.957159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.341 [2024-11-20 10:44:14.957175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.341 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 
00:27:14.342 [2024-11-20 10:44:14.957431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.957790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 
00:27:14.342 [2024-11-20 10:44:14.957904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.957918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 
00:27:14.342 [2024-11-20 10:44:14.958474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.958882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 
00:27:14.342 [2024-11-20 10:44:14.958963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.958978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.959046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.959060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.959150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.959165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.959245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.959260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 00:27:14.342 [2024-11-20 10:44:14.959335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.342 [2024-11-20 10:44:14.959350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.342 qpair failed and we were unable to recover it. 
00:27:14.342 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:14.342 [2024-11-20 10:44:14.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.959430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.959496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.959510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.959662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.959677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:14.342 [2024-11-20 10:44:14.959811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.959828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.959908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.959922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.960003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.960019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.342 [2024-11-20 10:44:14.960110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.960126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.960217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.960232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.960305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.342 [2024-11-20 10:44:14.960320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.342 qpair failed and we were unable to recover it.
00:27:14.342 [2024-11-20 10:44:14.960409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.343 [2024-11-20 10:44:14.960424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.343 qpair failed and we were unable to recover it.
00:27:14.343 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.343 [2024-11-20 10:44:14.960494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.343 [2024-11-20 10:44:14.960511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.343 qpair failed and we were unable to recover it.
00:27:14.343 [2024-11-20 10:44:14.960585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.343 [2024-11-20 10:44:14.960599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.343 qpair failed and we were unable to recover it.
00:27:14.343 [2024-11-20 10:44:14.960666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.343 [2024-11-20 10:44:14.960680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.343 qpair failed and we were unable to recover it.
00:27:14.343 [2024-11-20 10:44:14.960762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.343 [2024-11-20 10:44:14.960777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.343 qpair failed and we were unable to recover it.
00:27:14.343 [2024-11-20 10:44:14.960843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.960858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.960930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.960945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.961435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.961925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.961939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.962029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.962452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.962889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.962977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.962998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.963331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.343 [2024-11-20 10:44:14.963797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 
00:27:14.343 [2024-11-20 10:44:14.963903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.343 [2024-11-20 10:44:14.963918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.343 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.963998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.964153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.964244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.964329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 
00:27:14.344 [2024-11-20 10:44:14.964433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.964508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.344 [2024-11-20 10:44:14.964588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.344 [2024-11-20 10:44:14.964602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.344 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.964670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.964684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.964753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.964767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 
00:27:14.612 [2024-11-20 10:44:14.964841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.964856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.964929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.964942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 
00:27:14.612 [2024-11-20 10:44:14.965342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-11-20 10:44:14.965670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-11-20 10:44:14.965684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-11-20 10:44:14.965764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-11-20 10:44:14.965779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-11-20 10:44:14.965855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-11-20 10:44:14.965869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-11-20 10:44:14.965958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-11-20 10:44:14.965976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-11-20 10:44:14.966062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-11-20 10:44:14.966076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-11-20 10:44:14.966150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-11-20 10:44:14.966165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-11-20 10:44:14.966235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.966972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.966988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.967973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.967989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.968075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.968160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.968254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.968334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.613 [2024-11-20 10:44:14.968423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.613 qpair failed and we were unable to recover it.
00:27:14.613 [2024-11-20 10:44:14.968501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.968588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.968667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.968749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.968831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.968912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.968926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.969954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.969973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.970906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.970987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.971002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.971075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.971090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.971158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.971172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.971240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.971254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.971349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.614 [2024-11-20 10:44:14.971363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.614 qpair failed and we were unable to recover it.
00:27:14.614 [2024-11-20 10:44:14.971439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.971946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.971969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.972930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.972943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.973962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.973983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.974963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.974977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.975046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.975061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.975253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.975270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.975333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.975357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.975491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.975504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.615 [2024-11-20 10:44:14.975586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.615 [2024-11-20 10:44:14.975601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.615 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.975666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.975680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.975750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.975764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.975852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.975866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.975965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.975980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.976956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.976971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.977126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.977285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.977369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.977445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.616 [2024-11-20 10:44:14.977526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [2024-11-20 10:44:14.977659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.977674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.977743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.977757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.977819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.977833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.977908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.977922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 
00:27:14.616 [2024-11-20 10:44:14.978147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 
00:27:14.616 [2024-11-20 10:44:14.978626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.978981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.978998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 
00:27:14.616 [2024-11-20 10:44:14.979085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.979099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.979231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.979246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.979314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.979328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-11-20 10:44:14.979406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-11-20 10:44:14.979421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.979491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.979602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.979711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.979804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.979887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.979967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.979982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.980056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.980612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.980924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.980938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.981125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.981676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.981843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.981859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.982331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 00:27:14.617 [2024-11-20 10:44:14.982742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it. 
00:27:14.617 [2024-11-20 10:44:14.982836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-11-20 10:44:14.982851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.982918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.982932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 
00:27:14.618 [2024-11-20 10:44:14.983301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 
00:27:14.618 [2024-11-20 10:44:14.983738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.983905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.983919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 
00:27:14.618 [2024-11-20 10:44:14.984239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 
00:27:14.618 [2024-11-20 10:44:14.984698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.984964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 00:27:14.618 [2024-11-20 10:44:14.985047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-11-20 10:44:14.985061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it. 
00:27:14.618 [2024-11-20 10:44:14.985196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.618 [2024-11-20 10:44:14.985209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.618 qpair failed and we were unable to recover it.
00:27:14.620 Malloc0
00:27:14.621 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.621 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:14.621 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.621 10:44:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.621 [2024-11-20 10:44:14.997831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-11-20 10:44:14.997846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-11-20 10:44:14.997917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.997931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:14.998350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.998855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.998870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:14.999024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:14.999603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:14.999967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:14.999984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:15.000160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:15.000748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.000936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.000957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.001032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.001046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.001120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.001134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-11-20 10:44:15.001167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.622 [2024-11-20 10:44:15.001208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.001221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.001300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.001312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-11-20 10:44:15.001404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-11-20 10:44:15.001418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.001559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.001575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.001661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.001674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.001743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.001757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.001842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.001857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.001917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.001931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.002338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.002814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.002962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.002978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.003423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.003842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.003920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.003934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6420000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.004655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.004924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.004938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.005015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.005029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-11-20 10:44:15.005163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.005176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.005259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.005273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-11-20 10:44:15.005422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-11-20 10:44:15.005436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.005503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.005517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.005581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 
00:27:14.624 [2024-11-20 10:44:15.005665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.005679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.005814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.005829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.005909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.005923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 
00:27:14.624 [2024-11-20 10:44:15.006305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-11-20 10:44:15.006664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-11-20 10:44:15.006678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 
00:27:14.625 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:14.625 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:27:14.625 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:14.625 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:27:14.626 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.626 [2024-11-20 10:44:15.017953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.626 [2024-11-20 10:44:15.017969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.626 qpair failed and we were unable to recover it. 00:27:14.626 [2024-11-20 10:44:15.018104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.626 [2024-11-20 10:44:15.018118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.626 qpair failed and we were unable to recover it. 00:27:14.626 [2024-11-20 10:44:15.018200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.626 [2024-11-20 10:44:15.018214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.626 qpair failed and we were unable to recover it. 00:27:14.626 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.626 [2024-11-20 10:44:15.018348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.626 [2024-11-20 10:44:15.018363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420 00:27:14.626 qpair failed and we were unable to recover it. 
00:27:14.626 [2024-11-20 10:44:15.018498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.626 [2024-11-20 10:44:15.018513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.626 qpair failed and we were unable to recover it.
00:27:14.626 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.626 [2024-11-20 10:44:15.018595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.626 [2024-11-20 10:44:15.018611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.626 qpair failed and we were unable to recover it.
00:27:14.626 [2024-11-20 10:44:15.018707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.018722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.627 [2024-11-20 10:44:15.018928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.018944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.019899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.019913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.020909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.020923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.021974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.021989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 [2024-11-20 10:44:15.022834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.627 [2024-11-20 10:44:15.022848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.022939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.022962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.023981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.023996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.024973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.024988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.025828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.025843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.628 [2024-11-20 10:44:15.026049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:14.628 [2024-11-20 10:44:15.026322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.628 [2024-11-20 10:44:15.026650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.026871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.628 [2024-11-20 10:44:15.026958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.026979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.027118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.628 [2024-11-20 10:44:15.027132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.628 qpair failed and we were unable to recover it.
00:27:14.628 [2024-11-20 10:44:15.027213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.027977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.027993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.028920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.028933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a6ba0 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.029071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.629 [2024-11-20 10:44:15.029135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 [2024-11-20 10:44:15.029662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:14.629 [2024-11-20 10:44:15.031825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.629 [2024-11-20 10:44:15.031958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.629 [2024-11-20 10:44:15.032008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.629 [2024-11-20 10:44:15.032033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.629 [2024-11-20 10:44:15.032055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:14.629 [2024-11-20 10:44:15.032110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.629 [2024-11-20 10:44:15.041753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.629 [2024-11-20 10:44:15.041840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.629 [2024-11-20 10:44:15.041881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.629 [2024-11-20 10:44:15.041904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.629 [2024-11-20 10:44:15.041924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.629 [2024-11-20 10:44:15.041982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:14.629 qpair failed and we were unable to recover it.
00:27:14.629 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3645807 00:27:14.629 [2024-11-20 10:44:15.051774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.629 [2024-11-20 10:44:15.051848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.629 [2024-11-20 10:44:15.051873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.629 [2024-11-20 10:44:15.051888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.629 [2024-11-20 10:44:15.051901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.629 [2024-11-20 10:44:15.051931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.629 qpair failed and we were unable to recover it. 
00:27:14.629 [2024-11-20 10:44:15.061772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.629 [2024-11-20 10:44:15.061834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.629 [2024-11-20 10:44:15.061856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.629 [2024-11-20 10:44:15.061866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.629 [2024-11-20 10:44:15.061875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.629 [2024-11-20 10:44:15.061895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.629 qpair failed and we were unable to recover it. 
00:27:14.629 [2024-11-20 10:44:15.071807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.629 [2024-11-20 10:44:15.071866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.629 [2024-11-20 10:44:15.071879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.629 [2024-11-20 10:44:15.071886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.629 [2024-11-20 10:44:15.071893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.629 [2024-11-20 10:44:15.071908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.629 qpair failed and we were unable to recover it. 
00:27:14.629 [2024-11-20 10:44:15.081779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.629 [2024-11-20 10:44:15.081835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.629 [2024-11-20 10:44:15.081849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.629 [2024-11-20 10:44:15.081856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.629 [2024-11-20 10:44:15.081862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.629 [2024-11-20 10:44:15.081877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.629 qpair failed and we were unable to recover it. 
00:27:14.629 [2024-11-20 10:44:15.091787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.629 [2024-11-20 10:44:15.091843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.629 [2024-11-20 10:44:15.091857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.629 [2024-11-20 10:44:15.091863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.629 [2024-11-20 10:44:15.091869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.629 [2024-11-20 10:44:15.091885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.629 qpair failed and we were unable to recover it. 
00:27:14.629 [2024-11-20 10:44:15.101828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.101887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.101900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.101910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.101916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.101932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.111876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.111930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.111943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.111954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.111960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.111975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.121936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.122002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.122015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.122022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.122029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.122044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.131961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.132013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.132026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.132033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.132039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.132054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.141942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.142005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.142018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.142024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.142031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.142046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.151985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.152042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.152056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.152063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.152069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.152084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.161994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.162048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.162061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.162068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.162074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.162089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.172031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.172087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.172100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.172107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.172113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.172128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.182069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.182128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.182142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.182149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.182155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.182170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.192070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.192129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.192142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.192149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.192155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.192169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.202061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.202119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.202131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.202139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.202145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.202159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.212139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.212192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.212206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.212212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.212218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.212233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.630 [2024-11-20 10:44:15.222158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.630 [2024-11-20 10:44:15.222213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.630 [2024-11-20 10:44:15.222226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.630 [2024-11-20 10:44:15.222233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.630 [2024-11-20 10:44:15.222239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.630 [2024-11-20 10:44:15.222253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.630 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.232199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.232254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.232267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.232278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.232283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.232298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.242222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.242277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.242305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.242312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.242318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.242339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.252253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.252304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.252318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.252326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.252332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.252347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.262279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.262335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.262348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.262355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.262361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.262376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.272361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.272416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.272429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.272436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.272442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.272460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.282343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.282393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.282406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.282413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.282419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.282433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.292367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.292416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.292429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.292435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.292441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.292456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.302422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.302482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.302495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.302502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.302508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.302523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.312432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.312506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.312518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.312525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.312531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.312545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.631 [2024-11-20 10:44:15.322448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.631 [2024-11-20 10:44:15.322504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.631 [2024-11-20 10:44:15.322517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.631 [2024-11-20 10:44:15.322523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.631 [2024-11-20 10:44:15.322529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.631 [2024-11-20 10:44:15.322544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.631 qpair failed and we were unable to recover it. 
00:27:14.891 [2024-11-20 10:44:15.332440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.891 [2024-11-20 10:44:15.332497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.891 [2024-11-20 10:44:15.332511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.891 [2024-11-20 10:44:15.332518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.891 [2024-11-20 10:44:15.332524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.891 [2024-11-20 10:44:15.332539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.891 qpair failed and we were unable to recover it. 
00:27:14.891 [2024-11-20 10:44:15.342514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.891 [2024-11-20 10:44:15.342570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.891 [2024-11-20 10:44:15.342583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.891 [2024-11-20 10:44:15.342589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.891 [2024-11-20 10:44:15.342596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:14.891 [2024-11-20 10:44:15.342610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.891 qpair failed and we were unable to recover it. 
00:27:14.891 .. 00:27:15.156 [2024-11-20 10:44:15.352564 through 10:44:15.683601] The identical connect-failure sequence (ctrlr.c: 762 "Unknown controller ID 0x1"; nvme_fabric.c: 599/610 "Connect command failed, rc -5", "sct 1, sc 130"; nvme_tcp.c: 2348/2125 failed CONNECT poll on tqpair=0x7f6424000b90; nvme_qpair.c: 812 "CQ transport error -6 (No such device or address) on qpair id 2") repeats for 34 further attempts at roughly 10 ms intervals, each ending "qpair failed and we were unable to recover it."
00:27:15.156 [2024-11-20 10:44:15.693516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.693568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.693581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.693588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.693594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.693608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.703573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.703629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.703643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.703649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.703655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.703670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.713590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.713653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.713666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.713673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.713679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.713693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.723584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.723637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.723651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.723658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.723664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.723679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.733618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.733672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.733684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.733691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.733697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.733712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.743653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.743713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.743726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.743733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.743739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.743754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.753653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.753716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.753729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.753736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.753742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.753757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.763710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.763795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.763807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.763814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.763820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.763834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 10:44:15.773724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.156 [2024-11-20 10:44:15.773808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.156 [2024-11-20 10:44:15.773822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.156 [2024-11-20 10:44:15.773828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.156 [2024-11-20 10:44:15.773834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.156 [2024-11-20 10:44:15.773848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.783749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.783807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.783820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.783827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.783833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.783847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.793761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.793823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.793836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.793846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.793852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.793868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.803788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.803839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.803853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.803859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.803865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.803880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.813825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.813877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.813890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.813897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.813902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.813917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.823874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.823928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.823941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.823952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.823958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.823973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.833987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.834053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.834067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.834074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.834080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.834102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.844022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.844078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.844091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.844099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.844105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.844120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.853995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.854052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.854065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.854072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.854078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.854093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.864017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.864074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.864088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.864094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.864101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.864116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 10:44:15.874060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.157 [2024-11-20 10:44:15.874130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.157 [2024-11-20 10:44:15.874143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.157 [2024-11-20 10:44:15.874150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.157 [2024-11-20 10:44:15.874156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.157 [2024-11-20 10:44:15.874171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.884043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.884096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.884110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.884117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.884123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.884137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.894067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.894124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.894137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.894144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.894150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.894164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.904096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.904154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.904167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.904173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.904179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.904194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.914147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.914223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.914236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.914243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.914248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.914263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.924147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.924199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.924215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.924222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.924228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.924242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.934187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.934239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.418 [2024-11-20 10:44:15.934252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.418 [2024-11-20 10:44:15.934259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.418 [2024-11-20 10:44:15.934265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.418 [2024-11-20 10:44:15.934280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.418 qpair failed and we were unable to recover it. 
00:27:15.418 [2024-11-20 10:44:15.944229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.418 [2024-11-20 10:44:15.944286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.419 [2024-11-20 10:44:15.944299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.419 [2024-11-20 10:44:15.944305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.419 [2024-11-20 10:44:15.944311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.419 [2024-11-20 10:44:15.944325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.419 qpair failed and we were unable to recover it. 
00:27:15.419 [2024-11-20 10:44:15.954257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.419 [2024-11-20 10:44:15.954312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.419 [2024-11-20 10:44:15.954324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.419 [2024-11-20 10:44:15.954331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.419 [2024-11-20 10:44:15.954337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.419 [2024-11-20 10:44:15.954352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.419 qpair failed and we were unable to recover it. 
00:27:15.419 [2024-11-20 10:44:15.964279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:15.964331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:15.964344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:15.964350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:15.964359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:15.964374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:15.974303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:15.974358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:15.974371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:15.974378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:15.974384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:15.974398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:15.984367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:15.984423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:15.984436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:15.984443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:15.984449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:15.984463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:15.994362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:15.994417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:15.994429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:15.994436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:15.994442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:15.994457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.004412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.004479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.004493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:16.004499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:16.004505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:16.004520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.014457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.014513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.014526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:16.014533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:16.014539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:16.014554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.024471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.024530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.024543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:16.024550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:16.024556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:16.024571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.034402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.034461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.034477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:16.034484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:16.034490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:16.034506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.044510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.044585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.044598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.419 [2024-11-20 10:44:16.044605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.419 [2024-11-20 10:44:16.044610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.419 [2024-11-20 10:44:16.044624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.419 qpair failed and we were unable to recover it.
00:27:15.419 [2024-11-20 10:44:16.054477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.419 [2024-11-20 10:44:16.054529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.419 [2024-11-20 10:44:16.054546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.054555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.054562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.054578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.064557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.064614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.064626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.064632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.064638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.064652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.074596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.074653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.074666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.074673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.074679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.074694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.084611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.084666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.084678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.084685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.084690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.084705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.094709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.094763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.094775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.094782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.094791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.094805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.104690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.104748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.104760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.104767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.104773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.104788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.114649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.114721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.114734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.114741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.114746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.114761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.124744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.124795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.124809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.124815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.124821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.124836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.134686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.134775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.134789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.134796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.134801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.134816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.420 [2024-11-20 10:44:16.144779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.420 [2024-11-20 10:44:16.144837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.420 [2024-11-20 10:44:16.144850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.420 [2024-11-20 10:44:16.144857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.420 [2024-11-20 10:44:16.144863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.420 [2024-11-20 10:44:16.144878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.420 qpair failed and we were unable to recover it.
00:27:15.680 [2024-11-20 10:44:16.154832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.680 [2024-11-20 10:44:16.154885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.680 [2024-11-20 10:44:16.154899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.680 [2024-11-20 10:44:16.154905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.680 [2024-11-20 10:44:16.154911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.680 [2024-11-20 10:44:16.154926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.680 qpair failed and we were unable to recover it.
00:27:15.680 [2024-11-20 10:44:16.164862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.680 [2024-11-20 10:44:16.164918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.680 [2024-11-20 10:44:16.164931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.680 [2024-11-20 10:44:16.164938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.680 [2024-11-20 10:44:16.164944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.680 [2024-11-20 10:44:16.164964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.680 qpair failed and we were unable to recover it.
00:27:15.680 [2024-11-20 10:44:16.174874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.174929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.174942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.174954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.174961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.174975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.184900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.184997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.185013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.185020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.185026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.185040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.194888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.194951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.194965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.194972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.194977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.194992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.204962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.205013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.205026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.205032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.205038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.205052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.215000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.215056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.215069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.215075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.215081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.215095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.225031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.225088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.225102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.225112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.225118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.225133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.235144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.235206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.235220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.235227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.235232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.235247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.245056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.245109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.245122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.245129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.245135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.245150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.255057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.255113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.255126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.255133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.255139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.255153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.265147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.265201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.265215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.265221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.265227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.265241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.275122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.275178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.275192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.275199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.275205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.275219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.285177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.681 [2024-11-20 10:44:16.285229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.681 [2024-11-20 10:44:16.285242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.681 [2024-11-20 10:44:16.285249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.681 [2024-11-20 10:44:16.285255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.681 [2024-11-20 10:44:16.285270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.681 qpair failed and we were unable to recover it.
00:27:15.681 [2024-11-20 10:44:16.295237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.682 [2024-11-20 10:44:16.295293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.682 [2024-11-20 10:44:16.295307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.682 [2024-11-20 10:44:16.295313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.682 [2024-11-20 10:44:16.295319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.682 [2024-11-20 10:44:16.295333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.682 qpair failed and we were unable to recover it.
00:27:15.682 [2024-11-20 10:44:16.305203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.682 [2024-11-20 10:44:16.305258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.682 [2024-11-20 10:44:16.305271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.682 [2024-11-20 10:44:16.305277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.682 [2024-11-20 10:44:16.305283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:15.682 [2024-11-20 10:44:16.305297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.682 qpair failed and we were unable to recover it.
00:27:15.682 [2024-11-20 10:44:16.315291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.315349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.315362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.315369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.315375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.315390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.325297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.325350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.325363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.325369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.325375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.325389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.335341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.335408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.335422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.335429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.335435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.335449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.345355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.345412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.345426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.345432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.345439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.345453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.355407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.355513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.355526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.355536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.355542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.355557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.365412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.365467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.365480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.365486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.365492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.365507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.375385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.375435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.375448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.375455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.375460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.375475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.385423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.385480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.385493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.385500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.385505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.385521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.395455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.395507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.395520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.395527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.395533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.395550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-11-20 10:44:16.405519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.682 [2024-11-20 10:44:16.405578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.682 [2024-11-20 10:44:16.405592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.682 [2024-11-20 10:44:16.405598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.682 [2024-11-20 10:44:16.405604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.682 [2024-11-20 10:44:16.405619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.415498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.415551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.415564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.415570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.415576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.415591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.425595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.425665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.425678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.425684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.425690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.425704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.435549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.435611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.435625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.435632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.435638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.435654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.445692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.445781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.445794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.445800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.445806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.445821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.455704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.455756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.455769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.455776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.455782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.455796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.465701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.465761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.465775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.465783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.465790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.942 [2024-11-20 10:44:16.465806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.942 qpair failed and we were unable to recover it. 
00:27:15.942 [2024-11-20 10:44:16.475734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.942 [2024-11-20 10:44:16.475790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.942 [2024-11-20 10:44:16.475803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.942 [2024-11-20 10:44:16.475809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.942 [2024-11-20 10:44:16.475815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.475830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.485766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.485821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.485837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.485843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.485849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.485863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.495789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.495851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.495865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.495872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.495878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.495892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.505875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.505955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.505968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.505975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.505981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.505996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.515852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.515908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.515921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.515927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.515933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.515951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.525859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.525912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.525925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.525932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.525943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.525964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.535929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.535986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.536000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.536007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.536013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.536027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.545941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.546005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.546018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.546024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.546030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.546045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.555967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.556024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.556036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.556043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.556049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.556063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.565995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.566049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.566062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.566068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.566074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.566088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.576013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.576086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.576099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.576106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.576112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.576126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.586042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.586099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.586112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.586119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.586124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.586140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.596082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.596135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.596147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.596154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.596160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.943 [2024-11-20 10:44:16.596174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.943 qpair failed and we were unable to recover it. 
00:27:15.943 [2024-11-20 10:44:16.606135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.943 [2024-11-20 10:44:16.606225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.943 [2024-11-20 10:44:16.606239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.943 [2024-11-20 10:44:16.606245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.943 [2024-11-20 10:44:16.606251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.606266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.616130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.616183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.616199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.616206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.616212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.616226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.626193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.626274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.626287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.626293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.626299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.626314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.636207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.636261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.636274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.636281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.636287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.636302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.646287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.646343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.646356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.646362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.646368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.646383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.656245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.656328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.656340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.656347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.656356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.656371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:15.944 [2024-11-20 10:44:16.666294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.944 [2024-11-20 10:44:16.666350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.944 [2024-11-20 10:44:16.666363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.944 [2024-11-20 10:44:16.666369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.944 [2024-11-20 10:44:16.666375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:15.944 [2024-11-20 10:44:16.666389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.944 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.676315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.204 [2024-11-20 10:44:16.676369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.204 [2024-11-20 10:44:16.676383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.204 [2024-11-20 10:44:16.676390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.204 [2024-11-20 10:44:16.676396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.204 [2024-11-20 10:44:16.676411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.204 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.686365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.204 [2024-11-20 10:44:16.686428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.204 [2024-11-20 10:44:16.686440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.204 [2024-11-20 10:44:16.686448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.204 [2024-11-20 10:44:16.686453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.204 [2024-11-20 10:44:16.686468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.204 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.696371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.204 [2024-11-20 10:44:16.696427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.204 [2024-11-20 10:44:16.696441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.204 [2024-11-20 10:44:16.696447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.204 [2024-11-20 10:44:16.696453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.204 [2024-11-20 10:44:16.696468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.204 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.706399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.204 [2024-11-20 10:44:16.706454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.204 [2024-11-20 10:44:16.706467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.204 [2024-11-20 10:44:16.706474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.204 [2024-11-20 10:44:16.706480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.204 [2024-11-20 10:44:16.706494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.204 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.716425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.204 [2024-11-20 10:44:16.716479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.204 [2024-11-20 10:44:16.716493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.204 [2024-11-20 10:44:16.716500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.204 [2024-11-20 10:44:16.716506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.204 [2024-11-20 10:44:16.716520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.204 qpair failed and we were unable to recover it. 
00:27:16.204 [2024-11-20 10:44:16.726380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.726436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.726450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.726457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.726463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.726478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.736455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.736513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.736527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.736534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.736540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.736555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.746452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.746509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.746525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.746532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.746537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.746552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.756576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.756632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.756645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.756652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.756658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.756673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.766570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.766622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.766635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.766642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.766648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.766662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.776597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.776649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.776662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.776669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.776675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.776689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.786646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.786721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.786733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.786743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.786749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.786763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.796652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.796705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.796718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.796724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.796731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.796745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.806702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.806766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.806779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.806786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.806792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.806807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.816698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.816752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.816765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.816772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.816778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.816793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.826755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.826836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.826848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.826855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.826861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.826879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.836760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.836820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.836834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.836840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.836846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.836861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.846785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.205 [2024-11-20 10:44:16.846861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.205 [2024-11-20 10:44:16.846874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.205 [2024-11-20 10:44:16.846881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.205 [2024-11-20 10:44:16.846887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.205 [2024-11-20 10:44:16.846902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.205 qpair failed and we were unable to recover it. 
00:27:16.205 [2024-11-20 10:44:16.856844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.856925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.856937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.856944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.856953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.856968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.866851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.866908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.866920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.866927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.866933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.866951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.876915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.877020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.877033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.877040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.877046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.877061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.886864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.886958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.886971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.886979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.886985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.886999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.896863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.896964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.896977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.896984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.896989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.897004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.906976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.907035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.907048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.907055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.907061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.907076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.916991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.917047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.917060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.917070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.917076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.917092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.206 [2024-11-20 10:44:16.927011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.206 [2024-11-20 10:44:16.927062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.206 [2024-11-20 10:44:16.927075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.206 [2024-11-20 10:44:16.927082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.206 [2024-11-20 10:44:16.927087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.206 [2024-11-20 10:44:16.927102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.206 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.937036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.937089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.937102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.937109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.937115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.937129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.947008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.947066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.947079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.947086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.947092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.947106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.957109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.957164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.957178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.957184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.957191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.957209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.967177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.967231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.967244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.967250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.967256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.967270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.977160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.977211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.977224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.977231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.977237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.977251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.987193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.987249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.987262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.987269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.987274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.987289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:16.997196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:16.997258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:16.997271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:16.997278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:16.997284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:16.997298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.466 [2024-11-20 10:44:17.007248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.466 [2024-11-20 10:44:17.007353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.466 [2024-11-20 10:44:17.007366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.466 [2024-11-20 10:44:17.007373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.466 [2024-11-20 10:44:17.007379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.466 [2024-11-20 10:44:17.007394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.466 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.017271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.017342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.017355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.017362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.017368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.017383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.027326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.027412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.027425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.027431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.027437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.027451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.037342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.037399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.037412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.037419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.037425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.037440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.047364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.047420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.047435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.047443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.047449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.047463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.057396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.057458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.057470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.057477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.057483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.057497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.067426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.067481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.067493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.067500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.067506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.067520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.077451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.077511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.077524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.077531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.077537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.077551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.087402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.087461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.087474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.087481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.087490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.087504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.097523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.097580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.097594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.097601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.097607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.097621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.107487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.467 [2024-11-20 10:44:17.107574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.467 [2024-11-20 10:44:17.107588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.467 [2024-11-20 10:44:17.107595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.467 [2024-11-20 10:44:17.107602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.467 [2024-11-20 10:44:17.107616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.467 qpair failed and we were unable to recover it. 
00:27:16.467 [2024-11-20 10:44:17.117560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.117614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.117627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.117634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.117640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.117654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.127594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.127650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.127663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.127670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.127676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.127690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.137642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.137701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.137714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.137721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.137727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.137741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.147634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.147689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.147702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.147709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.147715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.147730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.157680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.157734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.157748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.157754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.157760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.157775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.167710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.167767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.167780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.167787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.167793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.167808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.177729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.177781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.177798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.177805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.177811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.177826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-11-20 10:44:17.187749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-11-20 10:44:17.187806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-11-20 10:44:17.187819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-11-20 10:44:17.187825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-11-20 10:44:17.187831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.468 [2024-11-20 10:44:17.187846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 10:44:17.197790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.727 [2024-11-20 10:44:17.197847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.727 [2024-11-20 10:44:17.197860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.727 [2024-11-20 10:44:17.197866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.727 [2024-11-20 10:44:17.197872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.727 [2024-11-20 10:44:17.197887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 10:44:17.207813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.727 [2024-11-20 10:44:17.207902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.727 [2024-11-20 10:44:17.207915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.727 [2024-11-20 10:44:17.207922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.727 [2024-11-20 10:44:17.207928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.727 [2024-11-20 10:44:17.207943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.727 qpair failed and we were unable to recover it.
00:27:16.727 [2024-11-20 10:44:17.217884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.727 [2024-11-20 10:44:17.217935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.727 [2024-11-20 10:44:17.217951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.727 [2024-11-20 10:44:17.217958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.727 [2024-11-20 10:44:17.217969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.727 [2024-11-20 10:44:17.217985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.727 qpair failed and we were unable to recover it.
00:27:16.727 [2024-11-20 10:44:17.227816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.727 [2024-11-20 10:44:17.227874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.727 [2024-11-20 10:44:17.227887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.727 [2024-11-20 10:44:17.227894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.727 [2024-11-20 10:44:17.227900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.727 [2024-11-20 10:44:17.227915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.727 qpair failed and we were unable to recover it.
00:27:16.727 [2024-11-20 10:44:17.237910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.237994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.238007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.238014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.238020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.238035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.247937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.248003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.248016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.248023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.248029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.248043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.258005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.258063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.258075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.258082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.258088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.258103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.267987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.268042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.268055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.268062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.268068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.268083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.278014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.278071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.278084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.278090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.278096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.278110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.287968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.288020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.288033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.288040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.288046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.288061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.298059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.298108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.298121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.298128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.298133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.298148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.308100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.308200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.308217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.308223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.308229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.308244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.318129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.318187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.318200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.318207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.318213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.318227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.328157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.328213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.328226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.328233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.328239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.328254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.338192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.338245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.338258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.338265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.338271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.338286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.348227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.348288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.348301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.348312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.348318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.728 [2024-11-20 10:44:17.348333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.728 qpair failed and we were unable to recover it.
00:27:16.728 [2024-11-20 10:44:17.358245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.728 [2024-11-20 10:44:17.358299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.728 [2024-11-20 10:44:17.358312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.728 [2024-11-20 10:44:17.358319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.728 [2024-11-20 10:44:17.358325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.358340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.368272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.368324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.368338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.368344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.368350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.368365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.378298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.378352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.378365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.378371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.378377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.378391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.388332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.388385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.388398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.388405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.388411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.388428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.398361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.398427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.398440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.398447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.398453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.398467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.408359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.408410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.408423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.408429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.408435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.408450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.418387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.418440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.418453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.418460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.418466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.418481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.428491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.428550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.428563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.428569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.428575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.428589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.438476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.438580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.438594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.438601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.438607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.438621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.729 [2024-11-20 10:44:17.448431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.729 [2024-11-20 10:44:17.448485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.729 [2024-11-20 10:44:17.448498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.729 [2024-11-20 10:44:17.448505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.729 [2024-11-20 10:44:17.448511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.729 [2024-11-20 10:44:17.448525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.458559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.458621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.458635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.458643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.458651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.458666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.468567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.468626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.468640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.468647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.468653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.468670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.478527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.478589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.478602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.478612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.478620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.478635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.488545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.488629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.488642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.488648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.488654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.488669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.498669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.498733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.498746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.498753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.498759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.498773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.508593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.508647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.508660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.508667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.508673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.508688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.518677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.518731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.518744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.518751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.518757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.518775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.528727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.528830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.528843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.528850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.528855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.528870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.538784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.538836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.538849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.538856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.538862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.538876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.548698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.989 [2024-11-20 10:44:17.548755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.989 [2024-11-20 10:44:17.548769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.989 [2024-11-20 10:44:17.548775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.989 [2024-11-20 10:44:17.548781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:16.989 [2024-11-20 10:44:17.548796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.989 qpair failed and we were unable to recover it.
00:27:16.989 [2024-11-20 10:44:17.558727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.989 [2024-11-20 10:44:17.558782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.989 [2024-11-20 10:44:17.558795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.989 [2024-11-20 10:44:17.558802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.989 [2024-11-20 10:44:17.558808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.989 [2024-11-20 10:44:17.558822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 10:44:17.568760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.989 [2024-11-20 10:44:17.568815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.989 [2024-11-20 10:44:17.568828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.989 [2024-11-20 10:44:17.568835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.989 [2024-11-20 10:44:17.568840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.989 [2024-11-20 10:44:17.568855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.578869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.578943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.578962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.578969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.578975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.578989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.588856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.588939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.588956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.588963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.588969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.588983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.598885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.598953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.598967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.598974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.598979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.598994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.608932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.608990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.609007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.609014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.609020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.609034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.618935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.618995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.619008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.619015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.619021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.619036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.629025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.629085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.629099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.629106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.629112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.629127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.638973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.639033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.639047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.639054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.639060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.639075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.649091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.649150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.649164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.649171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.649181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.649196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.659015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.659073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.659086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.659093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.659099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.659114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.669132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.669190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.669203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.669210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.669216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.669231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 10:44:17.679129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.990 [2024-11-20 10:44:17.679187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.990 [2024-11-20 10:44:17.679200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.990 [2024-11-20 10:44:17.679207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.990 [2024-11-20 10:44:17.679213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.990 [2024-11-20 10:44:17.679227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.991 [2024-11-20 10:44:17.689211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.991 [2024-11-20 10:44:17.689264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.991 [2024-11-20 10:44:17.689277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.991 [2024-11-20 10:44:17.689283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.991 [2024-11-20 10:44:17.689289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.991 [2024-11-20 10:44:17.689303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.991 qpair failed and we were unable to recover it. 
00:27:16.991 [2024-11-20 10:44:17.699140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.991 [2024-11-20 10:44:17.699191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.991 [2024-11-20 10:44:17.699204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.991 [2024-11-20 10:44:17.699211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.991 [2024-11-20 10:44:17.699216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.991 [2024-11-20 10:44:17.699231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.991 qpair failed and we were unable to recover it. 
00:27:16.991 [2024-11-20 10:44:17.709228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.991 [2024-11-20 10:44:17.709286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.991 [2024-11-20 10:44:17.709300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.991 [2024-11-20 10:44:17.709307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.991 [2024-11-20 10:44:17.709313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:16.991 [2024-11-20 10:44:17.709327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.991 qpair failed and we were unable to recover it. 
00:27:17.250 [2024-11-20 10:44:17.719260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.250 [2024-11-20 10:44:17.719317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.250 [2024-11-20 10:44:17.719330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.250 [2024-11-20 10:44:17.719337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.250 [2024-11-20 10:44:17.719343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.250 [2024-11-20 10:44:17.719357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.250 qpair failed and we were unable to recover it. 
00:27:17.250 [2024-11-20 10:44:17.729302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.250 [2024-11-20 10:44:17.729353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.250 [2024-11-20 10:44:17.729368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.250 [2024-11-20 10:44:17.729375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.250 [2024-11-20 10:44:17.729381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.250 [2024-11-20 10:44:17.729396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.250 qpair failed and we were unable to recover it. 
00:27:17.250 [2024-11-20 10:44:17.739318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.250 [2024-11-20 10:44:17.739371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.250 [2024-11-20 10:44:17.739387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.250 [2024-11-20 10:44:17.739395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.250 [2024-11-20 10:44:17.739400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.250 [2024-11-20 10:44:17.739416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.250 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.749358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.749416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.749430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.749437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.749443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.749457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.759314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.759375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.759389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.759396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.759402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.759416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.769388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.769441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.769454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.769460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.769466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.769481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.779405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.779454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.779467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.779474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.779483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.779497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.789441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.789498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.789511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.789518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.789523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.789538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.799416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.799470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.799483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.799489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.799495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.799509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.809553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.809606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.809619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.809625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.809631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.809645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.819487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.819571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.819584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.819591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.819596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.819611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.829561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.829618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.829632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.829638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.829644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.829659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.839687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.839751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.839764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.839770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.839776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.839791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.849598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.849651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.849664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.849671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.849677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.849691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.859723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.859776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.859789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.859795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.859801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.859816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.869714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.251 [2024-11-20 10:44:17.869769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.251 [2024-11-20 10:44:17.869786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.251 [2024-11-20 10:44:17.869792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.251 [2024-11-20 10:44:17.869798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.251 [2024-11-20 10:44:17.869813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.251 qpair failed and we were unable to recover it. 
00:27:17.251 [2024-11-20 10:44:17.879638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.879694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.879707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.879714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.879719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.879734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.889733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.889817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.889830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.889836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.889842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.889856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.899778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.899865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.899878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.899885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.899891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.899906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.909796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.909851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.909865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.909877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.909883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.909899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.919828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.919901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.919914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.919921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.919927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.919942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.929858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.929909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.929922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.929929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.929935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.929953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.939879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.939956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.939970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.939976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.939982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.939997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.949918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.949999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.950012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.950019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.950024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.950043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.959958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.960014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.960027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.960034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.960040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.960055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.252 [2024-11-20 10:44:17.969974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.252 [2024-11-20 10:44:17.970024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.252 [2024-11-20 10:44:17.970037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.252 [2024-11-20 10:44:17.970044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.252 [2024-11-20 10:44:17.970050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.252 [2024-11-20 10:44:17.970065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.252 qpair failed and we were unable to recover it. 
00:27:17.515 [2024-11-20 10:44:17.979991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.515 [2024-11-20 10:44:17.980048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.515 [2024-11-20 10:44:17.980061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.515 [2024-11-20 10:44:17.980068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.515 [2024-11-20 10:44:17.980074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.515 [2024-11-20 10:44:17.980088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.515 qpair failed and we were unable to recover it. 
00:27:17.515 [2024-11-20 10:44:17.990039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.515 [2024-11-20 10:44:17.990098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.515 [2024-11-20 10:44:17.990111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.515 [2024-11-20 10:44:17.990118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.515 [2024-11-20 10:44:17.990124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.515 [2024-11-20 10:44:17.990138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.515 qpair failed and we were unable to recover it. 
00:27:17.515 [2024-11-20 10:44:18.000128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.515 [2024-11-20 10:44:18.000211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.515 [2024-11-20 10:44:18.000225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.515 [2024-11-20 10:44:18.000232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.515 [2024-11-20 10:44:18.000239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.000252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.010105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.010161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.010175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.010182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.010188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.010203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.020112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.020166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.020179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.020186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.020191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.020206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.030150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.030209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.030223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.030230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.030236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.030251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.040156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.040212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.040225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.040235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.040241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.040256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.050288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.050340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.050353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.050359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.050365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.050379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.060230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.060278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.060291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.060298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.060304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.060318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.070201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.070258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.070271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.070277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.070284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.070298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.080290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.080342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.080355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.080361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.080367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.080385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.090367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.090431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.090443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.090450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.090455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.090470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.100338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.100388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.100402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.100408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.100414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.100429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.110384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.110440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.110454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.110460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.110466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.110480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.120461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.120521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.516 [2024-11-20 10:44:18.120534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.516 [2024-11-20 10:44:18.120540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.516 [2024-11-20 10:44:18.120546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.516 [2024-11-20 10:44:18.120560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.516 qpair failed and we were unable to recover it. 
00:27:17.516 [2024-11-20 10:44:18.130426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.516 [2024-11-20 10:44:18.130503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-11-20 10:44:18.130516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-11-20 10:44:18.130523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-11-20 10:44:18.130529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.517 [2024-11-20 10:44:18.130544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-11-20 10:44:18.140448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.140506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.140519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.140526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.140531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.140546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.150512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.150585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.150597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.150604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.150610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.150624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.160551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.160612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.160626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.160632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.160638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.160653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.170547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.170623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.170639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.170645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.170651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.170666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.180573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.180624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.180638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.180644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.180651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.180665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.190606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.190665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.190678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.190685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.190691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.190706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.200676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.200736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.200749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.200756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.200761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.200776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.210674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.210725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.210738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.210745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.210755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.210769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.220695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.220750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.220763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.220770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.220776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.220790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.230739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.230795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.230809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.230816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.230822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.230836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.517 [2024-11-20 10:44:18.240797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.517 [2024-11-20 10:44:18.240853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.517 [2024-11-20 10:44:18.240866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.517 [2024-11-20 10:44:18.240873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.517 [2024-11-20 10:44:18.240879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.517 [2024-11-20 10:44:18.240893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.517 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.250803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.250859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.250874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.250881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.250887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.250903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.260832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.260888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.260902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.260909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.260916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.260931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.270869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.270984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.270999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.271006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.271012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.271028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.280920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.281027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.281040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.281047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.281054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.281069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.290914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.290971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.290984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.290992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.290998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.291013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.300928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.300989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.301005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.301012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.301018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.301033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.310968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.311025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.311039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.311045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.311052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.311067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.320966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.321025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.321038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.321045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.321051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.321065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.331023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.331105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.331119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.331126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.331131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.331146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.341040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.341122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.341135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.341142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.341151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.341165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.351076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.351134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.351147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.351153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.351159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.351174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.828 [2024-11-20 10:44:18.361112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.828 [2024-11-20 10:44:18.361168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.828 [2024-11-20 10:44:18.361181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.828 [2024-11-20 10:44:18.361187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.828 [2024-11-20 10:44:18.361193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.828 [2024-11-20 10:44:18.361207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.828 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.371141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.371200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.371213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.371220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.371225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.371240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.381159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.381211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.381225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.381232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.381237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.381252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.391216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.391275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.391288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.391296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.391302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.391316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.401239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.401306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.401320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.401326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.401332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.401347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.411262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.411311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.411325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.411331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.411337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.411352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.421260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.421313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.421326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.421332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.421338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.421353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.431313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.431373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.431387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.431393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.431399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.431414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.441328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.441382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.441395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.441402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.441408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.441422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.451344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.451397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.451410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.451417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.451423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.451437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.461367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.461421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.461434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.461441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.461447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.461462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.471405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.471460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.471474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.471485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.471491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.471506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.481438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.829 [2024-11-20 10:44:18.481491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.829 [2024-11-20 10:44:18.481503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.829 [2024-11-20 10:44:18.481510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.829 [2024-11-20 10:44:18.481516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:17.829 [2024-11-20 10:44:18.481530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.829 qpair failed and we were unable to recover it.
00:27:17.829 [2024-11-20 10:44:18.491473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.829 [2024-11-20 10:44:18.491526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.829 [2024-11-20 10:44:18.491539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.829 [2024-11-20 10:44:18.491545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.829 [2024-11-20 10:44:18.491551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.829 [2024-11-20 10:44:18.491566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.829 qpair failed and we were unable to recover it. 
00:27:17.830 [2024-11-20 10:44:18.501467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.830 [2024-11-20 10:44:18.501553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.830 [2024-11-20 10:44:18.501565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.830 [2024-11-20 10:44:18.501572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.830 [2024-11-20 10:44:18.501578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.830 [2024-11-20 10:44:18.501591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.830 qpair failed and we were unable to recover it. 
00:27:17.830 [2024-11-20 10:44:18.511521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.830 [2024-11-20 10:44:18.511580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.830 [2024-11-20 10:44:18.511593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.830 [2024-11-20 10:44:18.511600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.830 [2024-11-20 10:44:18.511606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.830 [2024-11-20 10:44:18.511624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.830 qpair failed and we were unable to recover it. 
00:27:17.830 [2024-11-20 10:44:18.521569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.830 [2024-11-20 10:44:18.521626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.830 [2024-11-20 10:44:18.521639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.830 [2024-11-20 10:44:18.521646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.830 [2024-11-20 10:44:18.521651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.830 [2024-11-20 10:44:18.521665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.830 qpair failed and we were unable to recover it. 
00:27:17.830 [2024-11-20 10:44:18.531605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.830 [2024-11-20 10:44:18.531657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.830 [2024-11-20 10:44:18.531670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.830 [2024-11-20 10:44:18.531677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.830 [2024-11-20 10:44:18.531683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.830 [2024-11-20 10:44:18.531697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.830 qpair failed and we were unable to recover it. 
00:27:17.830 [2024-11-20 10:44:18.541601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.830 [2024-11-20 10:44:18.541651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.830 [2024-11-20 10:44:18.541664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.830 [2024-11-20 10:44:18.541671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.830 [2024-11-20 10:44:18.541677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:17.830 [2024-11-20 10:44:18.541692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.830 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.551633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.551690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.551704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.551710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.551716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.551731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.561676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.561744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.561757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.561763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.561770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.561784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.571694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.571745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.571759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.571765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.571771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.571786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.581711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.581763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.581776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.581783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.581789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.581804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.591755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.591811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.591824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.591831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.591837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.591851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.601785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.601844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.601857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.601867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.601873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.601888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.611800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.611881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.611895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.611901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.611907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.611922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.621868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.621917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.621930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.621936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.621943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.621961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.631866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.631924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.631937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.631944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.631954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.631969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.641881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.641945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.641961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.641968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.641973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.641992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.651907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.651998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.112 [2024-11-20 10:44:18.652012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.112 [2024-11-20 10:44:18.652018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.112 [2024-11-20 10:44:18.652024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.112 [2024-11-20 10:44:18.652039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.112 qpair failed and we were unable to recover it. 
00:27:18.112 [2024-11-20 10:44:18.661937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.112 [2024-11-20 10:44:18.661992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.662006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.662013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.662019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.662034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.671991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.672055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.672068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.672075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.672081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.672095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.682006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.682064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.682078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.682085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.682092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.682107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.692022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.692074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.692088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.692094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.692100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.692115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.702081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.702186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.702199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.702206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.702212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.702227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.712129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.712199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.712212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.712219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.712224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.712239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.722114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.722171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.722184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.722190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.722196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.722210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.732132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.732184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.732205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.732212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.732218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.732232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.742173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.742227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.742240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.742247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.742253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.742267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.752228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.113 [2024-11-20 10:44:18.752309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.113 [2024-11-20 10:44:18.752322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.113 [2024-11-20 10:44:18.752329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.113 [2024-11-20 10:44:18.752335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.113 [2024-11-20 10:44:18.752349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.113 qpair failed and we were unable to recover it. 
00:27:18.113 [2024-11-20 10:44:18.762239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.113 [2024-11-20 10:44:18.762296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.113 [2024-11-20 10:44:18.762309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.113 [2024-11-20 10:44:18.762315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.113 [2024-11-20 10:44:18.762322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.113 [2024-11-20 10:44:18.762336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.113 qpair failed and we were unable to recover it.
00:27:18.113 [2024-11-20 10:44:18.772256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.113 [2024-11-20 10:44:18.772312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.113 [2024-11-20 10:44:18.772325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.113 [2024-11-20 10:44:18.772332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.113 [2024-11-20 10:44:18.772341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.113 [2024-11-20 10:44:18.772356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.113 qpair failed and we were unable to recover it.
00:27:18.113 [2024-11-20 10:44:18.782284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.113 [2024-11-20 10:44:18.782334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.113 [2024-11-20 10:44:18.782346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.113 [2024-11-20 10:44:18.782353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.113 [2024-11-20 10:44:18.782359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.113 [2024-11-20 10:44:18.782374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.113 qpair failed and we were unable to recover it.
00:27:18.113 [2024-11-20 10:44:18.792309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.113 [2024-11-20 10:44:18.792366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.113 [2024-11-20 10:44:18.792379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.114 [2024-11-20 10:44:18.792386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.114 [2024-11-20 10:44:18.792392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.114 [2024-11-20 10:44:18.792407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.114 qpair failed and we were unable to recover it.
00:27:18.114 [2024-11-20 10:44:18.802372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.114 [2024-11-20 10:44:18.802431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.114 [2024-11-20 10:44:18.802443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.114 [2024-11-20 10:44:18.802450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.114 [2024-11-20 10:44:18.802456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.114 [2024-11-20 10:44:18.802470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.114 qpair failed and we were unable to recover it.
00:27:18.114 [2024-11-20 10:44:18.812365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.114 [2024-11-20 10:44:18.812418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.114 [2024-11-20 10:44:18.812431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.114 [2024-11-20 10:44:18.812438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.114 [2024-11-20 10:44:18.812443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.114 [2024-11-20 10:44:18.812458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.114 qpair failed and we were unable to recover it.
00:27:18.114 [2024-11-20 10:44:18.822392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.114 [2024-11-20 10:44:18.822442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.114 [2024-11-20 10:44:18.822455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.114 [2024-11-20 10:44:18.822461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.114 [2024-11-20 10:44:18.822467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.114 [2024-11-20 10:44:18.822482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.114 qpair failed and we were unable to recover it.
00:27:18.114 [2024-11-20 10:44:18.832454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.114 [2024-11-20 10:44:18.832510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.114 [2024-11-20 10:44:18.832523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.114 [2024-11-20 10:44:18.832529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.114 [2024-11-20 10:44:18.832535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.114 [2024-11-20 10:44:18.832550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.114 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.842456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.842512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.842525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.842531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.842537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.842552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.852488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.852545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.852558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.852565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.852571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.852586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.862509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.862561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.862577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.862584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.862590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.862604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.872491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.872545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.872558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.872565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.872570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.872585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.882577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.882641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.882654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.882661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.882667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.882681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.892603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.892683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.892697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.892703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.892709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.892724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.902562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.374 [2024-11-20 10:44:18.902624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.374 [2024-11-20 10:44:18.902637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.374 [2024-11-20 10:44:18.902644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.374 [2024-11-20 10:44:18.902654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.374 [2024-11-20 10:44:18.902668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.374 qpair failed and we were unable to recover it.
00:27:18.374 [2024-11-20 10:44:18.912711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.912764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.912777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.912783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.912789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.912804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.922684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.922736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.922749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.922756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.922762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.922777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.932715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.932767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.932780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.932787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.932793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.932807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.942688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.942771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.942784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.942791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.942797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.942812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.952703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.952757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.952771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.952777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.952783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.952798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.962811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.962866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.962880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.962886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.962892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.962906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.972831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.972910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.972924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.972930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.972936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.972955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.982869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.982927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.982940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.982950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.982957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.982972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:18.992875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:18.992934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:18.992951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:18.992958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:18.992964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:18.992978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:19.002898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:19.002954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:19.002967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:19.002974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:19.002980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:19.002995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:19.012877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:19.012928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:19.012941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:19.012951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:19.012958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:19.012972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:19.022899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:19.022954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:19.022968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:19.022975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:19.022981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:19.022996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:19.032998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.375 [2024-11-20 10:44:19.033054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.375 [2024-11-20 10:44:19.033067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.375 [2024-11-20 10:44:19.033077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.375 [2024-11-20 10:44:19.033084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.375 [2024-11-20 10:44:19.033098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.375 qpair failed and we were unable to recover it.
00:27:18.375 [2024-11-20 10:44:19.042973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.043026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.043039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.043046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.043052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.043067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.376 [2024-11-20 10:44:19.052998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.053050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.053063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.053070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.053076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.053091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.376 [2024-11-20 10:44:19.063008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.063062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.063076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.063083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.063089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.063104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.376 [2024-11-20 10:44:19.073138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.073216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.073229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.073236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.073242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.073261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.376 [2024-11-20 10:44:19.083081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.083137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.083151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.083157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.083163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.083178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.376 [2024-11-20 10:44:19.093116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.376 [2024-11-20 10:44:19.093203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.376 [2024-11-20 10:44:19.093216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.376 [2024-11-20 10:44:19.093223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.376 [2024-11-20 10:44:19.093229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.376 [2024-11-20 10:44:19.093244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.376 qpair failed and we were unable to recover it.
00:27:18.636 [2024-11-20 10:44:19.103115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.636 [2024-11-20 10:44:19.103171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.636 [2024-11-20 10:44:19.103183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.636 [2024-11-20 10:44:19.103190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.636 [2024-11-20 10:44:19.103197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:18.636 [2024-11-20 10:44:19.103212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.636 qpair failed and we were unable to recover it.
00:27:18.636 [2024-11-20 10:44:19.113207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.636 [2024-11-20 10:44:19.113261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.636 [2024-11-20 10:44:19.113274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.636 [2024-11-20 10:44:19.113281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.636 [2024-11-20 10:44:19.113287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.636 [2024-11-20 10:44:19.113301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.636 qpair failed and we were unable to recover it. 
00:27:18.636 [2024-11-20 10:44:19.123182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.636 [2024-11-20 10:44:19.123240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.636 [2024-11-20 10:44:19.123253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.636 [2024-11-20 10:44:19.123259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.636 [2024-11-20 10:44:19.123266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.636 [2024-11-20 10:44:19.123280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.636 qpair failed and we were unable to recover it. 
00:27:18.636 [2024-11-20 10:44:19.133288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.133345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.133359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.133365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.133371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.133386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.143342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.143418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.143431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.143438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.143444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.143458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.153275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.153333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.153345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.153352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.153358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.153373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.163412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.163467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.163483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.163490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.163496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.163510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.173362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.173434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.173447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.173454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.173460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.173474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.183421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.183505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.183519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.183526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.183532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.183547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.193459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.193519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.193532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.193539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.193545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.193560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.203529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.203586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.203599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.203606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.203612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.203630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.213515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.213572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.213585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.213592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.213598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.213612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.223483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.223569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.223582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.223589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.223595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.223609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.233506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.233565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.233578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.233585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.233591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.233605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.243584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.243643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.243657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.243663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.243669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.637 [2024-11-20 10:44:19.243684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.637 qpair failed and we were unable to recover it. 
00:27:18.637 [2024-11-20 10:44:19.253661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.637 [2024-11-20 10:44:19.253721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.637 [2024-11-20 10:44:19.253734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.637 [2024-11-20 10:44:19.253741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.637 [2024-11-20 10:44:19.253747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.253761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.263642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.263738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.263751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.263758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.263763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.263778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.273679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.273734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.273747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.273753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.273759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.273774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.283713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.283767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.283779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.283786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.283792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.283807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.293765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.293823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.293839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.293846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.293852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.293866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.303745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.303828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.303841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.303848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.303854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.303869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.313824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.313879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.313892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.313899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.313905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.313919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.323807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.323866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.323888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.323896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.323902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.323921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.333832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.333884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.333898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.333904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.333914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.333929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.343850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.343902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.343916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.343923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.343929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.343944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.353897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.353960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.353973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.353980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.353986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.354001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.638 [2024-11-20 10:44:19.363924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.638 [2024-11-20 10:44:19.364007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.638 [2024-11-20 10:44:19.364021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.638 [2024-11-20 10:44:19.364027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.638 [2024-11-20 10:44:19.364033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.638 [2024-11-20 10:44:19.364048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.638 qpair failed and we were unable to recover it. 
00:27:18.900 [2024-11-20 10:44:19.373951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.900 [2024-11-20 10:44:19.374044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.900 [2024-11-20 10:44:19.374057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.900 [2024-11-20 10:44:19.374064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.900 [2024-11-20 10:44:19.374070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.900 [2024-11-20 10:44:19.374084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.900 qpair failed and we were unable to recover it. 
00:27:18.900 [2024-11-20 10:44:19.383901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.900 [2024-11-20 10:44:19.383992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.900 [2024-11-20 10:44:19.384006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.900 [2024-11-20 10:44:19.384012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.900 [2024-11-20 10:44:19.384019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:18.900 [2024-11-20 10:44:19.384033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.900 qpair failed and we were unable to recover it. 
00:27:18.900 [the same seven-line CONNECT failure cycle repeats 34 more times, roughly every 10 ms from 2024-11-20 10:44:19.394014 through 10:44:19.725073, identical except for timestamps: Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, failed to connect tqpair=0x7f6424000b90, CQ transport error -6 on qpair id 2; each attempt ends with "qpair failed and we were unable to recover it."]
00:27:19.162 [2024-11-20 10:44:19.735002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.162 [2024-11-20 10:44:19.735056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.162 [2024-11-20 10:44:19.735070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.162 [2024-11-20 10:44:19.735077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.735083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.735098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.745017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.745067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.745080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.745087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.745093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.745108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.755051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.755109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.755122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.755128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.755134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.755148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.765088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.765143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.765156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.765163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.765169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.765186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.775115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.775182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.775195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.775201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.775207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.775221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.785144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.785192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.785205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.785211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.785217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.785231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.795205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.795260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.795273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.795280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.795286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.795301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.805205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.805261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.805274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.805281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.805287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.805301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.815250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.815301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.815314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.815320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.815327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.815340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.825264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.825321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.825333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.825339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.825345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.825359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.835306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.835383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.835396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.835403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.835408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.835423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.845315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.845372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.845385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.845392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.845397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.845412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.855347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.855402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.855417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.163 [2024-11-20 10:44:19.855425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.163 [2024-11-20 10:44:19.855431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.163 [2024-11-20 10:44:19.855446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.163 qpair failed and we were unable to recover it. 
00:27:19.163 [2024-11-20 10:44:19.865370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.163 [2024-11-20 10:44:19.865428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.163 [2024-11-20 10:44:19.865440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.164 [2024-11-20 10:44:19.865448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.164 [2024-11-20 10:44:19.865453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.164 [2024-11-20 10:44:19.865468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.164 qpair failed and we were unable to recover it. 
00:27:19.164 [2024-11-20 10:44:19.875417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.164 [2024-11-20 10:44:19.875517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.164 [2024-11-20 10:44:19.875530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.164 [2024-11-20 10:44:19.875537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.164 [2024-11-20 10:44:19.875543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.164 [2024-11-20 10:44:19.875558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.164 qpair failed and we were unable to recover it. 
00:27:19.164 [2024-11-20 10:44:19.885418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.164 [2024-11-20 10:44:19.885473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.164 [2024-11-20 10:44:19.885485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.164 [2024-11-20 10:44:19.885491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.164 [2024-11-20 10:44:19.885498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.164 [2024-11-20 10:44:19.885512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.164 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.895470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.895560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.895574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.895580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.895589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.895604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.905416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.905492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.905504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.905512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.905517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.905533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.915518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.915593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.915606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.915612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.915618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.915634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.925561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.925615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.925629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.925636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.925641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.925656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.935588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.935645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.935658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.935665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.935671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.935685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.945602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.945653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.945666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.945672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.945678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.945692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.422 [2024-11-20 10:44:19.955647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.422 [2024-11-20 10:44:19.955703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.422 [2024-11-20 10:44:19.955716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.422 [2024-11-20 10:44:19.955723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.422 [2024-11-20 10:44:19.955729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.422 [2024-11-20 10:44:19.955743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.422 qpair failed and we were unable to recover it. 
00:27:19.423 [2024-11-20 10:44:19.965664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.423 [2024-11-20 10:44:19.965727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.423 [2024-11-20 10:44:19.965740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.423 [2024-11-20 10:44:19.965747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.423 [2024-11-20 10:44:19.965753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.423 [2024-11-20 10:44:19.965768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.423 qpair failed and we were unable to recover it. 
00:27:19.423 [2024-11-20 10:44:19.975743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.423 [2024-11-20 10:44:19.975798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.423 [2024-11-20 10:44:19.975811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.423 [2024-11-20 10:44:19.975818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.423 [2024-11-20 10:44:19.975825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.423 [2024-11-20 10:44:19.975841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.423 qpair failed and we were unable to recover it. 
00:27:19.423 [2024-11-20 10:44:19.985752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.423 [2024-11-20 10:44:19.985808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.423 [2024-11-20 10:44:19.985824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.423 [2024-11-20 10:44:19.985831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.423 [2024-11-20 10:44:19.985837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.423 [2024-11-20 10:44:19.985851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.423 qpair failed and we were unable to recover it. 
00:27:19.423 [2024-11-20 10:44:19.995749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.423 [2024-11-20 10:44:19.995833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.423 [2024-11-20 10:44:19.995848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.423 [2024-11-20 10:44:19.995854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.423 [2024-11-20 10:44:19.995860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.423 [2024-11-20 10:44:19.995875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.423 qpair failed and we were unable to recover it. 
00:27:19.423 [2024-11-20 10:44:20.005793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.005846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.005861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.005868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.005874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.005889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.015857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.015921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.015940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.015952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.015958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.015976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.025882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.025939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.025958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.025969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.025975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.025991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.035901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.035966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.035980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.035987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.035993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.036027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.045941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.046008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.046024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.046032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.046038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.046055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.055886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.055974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.055988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.055995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.056001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.056017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.066036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.066136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.066149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.066155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.066161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.066177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.076008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.076069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.076083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.076090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.076096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.076112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.086056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.086133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.086149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.086156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.086162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.086178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.096080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.096136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.096150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.096156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.096162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.096178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.106094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.106149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.106162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.106169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.106175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.106190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.116125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.116183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.116197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.116203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.116210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.116225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.126190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.126246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.126260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.126267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.126273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.126288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.136225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.136287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.136301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.136307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.136313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.136328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.423 [2024-11-20 10:44:20.146196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.423 [2024-11-20 10:44:20.146253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.423 [2024-11-20 10:44:20.146266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.423 [2024-11-20 10:44:20.146273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.423 [2024-11-20 10:44:20.146279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.423 [2024-11-20 10:44:20.146294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.423 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.156245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.156301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.156315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.156324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.156331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.156346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.166274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.166331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.166344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.166351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.166357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.166372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.176305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.176362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.176376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.176383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.176389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.176404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.186334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.186395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.186409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.186416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.186423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.186438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.196284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.196338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.196353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.196360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.196367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.196388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.206385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.206471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.206486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.206493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.206499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.206515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.216398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.216452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.216466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.216473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.216479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.216494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.226482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.226548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.226561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.226568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.226574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.226589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.236448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.236522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.236536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.236543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.236549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.236564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.683 [2024-11-20 10:44:20.246496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.683 [2024-11-20 10:44:20.246555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.683 [2024-11-20 10:44:20.246569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.683 [2024-11-20 10:44:20.246575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.683 [2024-11-20 10:44:20.246581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.683 [2024-11-20 10:44:20.246596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.683 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.256541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.256601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.256614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.256621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.256626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.256641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.266546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.266620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.266633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.266640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.266645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.266660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.276570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.276625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.276639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.276645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.276651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.276666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.286600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.286655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.286671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.286678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.286683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.286698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.296616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.296671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.296684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.296691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.296697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.296711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.306691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.306742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.306756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.306763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.306769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.306784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.316698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.316760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.316773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.316781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.316787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.316802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.326754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.326813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.326826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.326833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.326843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.326858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.336771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.336825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.336839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.336846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.336852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.336866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.346785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.684 [2024-11-20 10:44:20.346843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.684 [2024-11-20 10:44:20.346856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.684 [2024-11-20 10:44:20.346863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.684 [2024-11-20 10:44:20.346869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90
00:27:19.684 [2024-11-20 10:44:20.346884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.684 qpair failed and we were unable to recover it.
00:27:19.684 [2024-11-20 10:44:20.356791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.684 [2024-11-20 10:44:20.356851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.684 [2024-11-20 10:44:20.356865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.684 [2024-11-20 10:44:20.356872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.684 [2024-11-20 10:44:20.356878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.684 [2024-11-20 10:44:20.356893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.684 qpair failed and we were unable to recover it. 
00:27:19.684 [2024-11-20 10:44:20.366804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.684 [2024-11-20 10:44:20.366898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.684 [2024-11-20 10:44:20.366912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.684 [2024-11-20 10:44:20.366919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.684 [2024-11-20 10:44:20.366925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.684 [2024-11-20 10:44:20.366940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.684 qpair failed and we were unable to recover it. 
00:27:19.684 [2024-11-20 10:44:20.376818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.684 [2024-11-20 10:44:20.376875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.684 [2024-11-20 10:44:20.376888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.684 [2024-11-20 10:44:20.376895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.684 [2024-11-20 10:44:20.376901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.684 [2024-11-20 10:44:20.376916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.684 qpair failed and we were unable to recover it. 
00:27:19.684 [2024-11-20 10:44:20.386870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.685 [2024-11-20 10:44:20.386926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.685 [2024-11-20 10:44:20.386940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.685 [2024-11-20 10:44:20.386952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.685 [2024-11-20 10:44:20.386958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.685 [2024-11-20 10:44:20.386973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.685 qpair failed and we were unable to recover it. 
00:27:19.685 [2024-11-20 10:44:20.396867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.685 [2024-11-20 10:44:20.396957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.685 [2024-11-20 10:44:20.396971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.685 [2024-11-20 10:44:20.396978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.685 [2024-11-20 10:44:20.396984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.685 [2024-11-20 10:44:20.396999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.685 qpair failed and we were unable to recover it. 
00:27:19.685 [2024-11-20 10:44:20.406888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.685 [2024-11-20 10:44:20.406944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.685 [2024-11-20 10:44:20.406962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.685 [2024-11-20 10:44:20.406969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.685 [2024-11-20 10:44:20.406975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.685 [2024-11-20 10:44:20.406990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.685 qpair failed and we were unable to recover it. 
00:27:19.944 [2024-11-20 10:44:20.416989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.944 [2024-11-20 10:44:20.417049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.944 [2024-11-20 10:44:20.417066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.944 [2024-11-20 10:44:20.417073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.944 [2024-11-20 10:44:20.417079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.944 [2024-11-20 10:44:20.417094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.944 qpair failed and we were unable to recover it. 
00:27:19.944 [2024-11-20 10:44:20.426995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.427053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.427066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.427073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.427079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.427094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.437018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.437074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.437087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.437093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.437099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.437114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.447144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.447208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.447221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.447228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.447233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.447249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.457102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.457203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.457216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.457223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.457232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.457247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.467158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.467211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.467224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.467231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.467237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.467251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.477123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.477178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.477192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.477199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.477205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.477219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.487156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.487240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.487252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.487259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.487265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.487279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.497148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.497201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.497214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.497221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.497227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.497242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.507177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.507234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.507247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.507254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.507260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.507275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.517330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.517405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.517417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.517424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.517430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.517445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.527342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.527398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.527411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.527420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.527426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.527441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.537252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.945 [2024-11-20 10:44:20.537305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.945 [2024-11-20 10:44:20.537318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.945 [2024-11-20 10:44:20.537325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.945 [2024-11-20 10:44:20.537331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.945 [2024-11-20 10:44:20.537345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.945 qpair failed and we were unable to recover it. 
00:27:19.945 [2024-11-20 10:44:20.547281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.547342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.547359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.547366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.547372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.547387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.557313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.557368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.557381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.557388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.557393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.557408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.567418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.567520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.567534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.567541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.567547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.567561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.577472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.577528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.577540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.577547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.577553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.577567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.587379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.587432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.587445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.587455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.587461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.587476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.597506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.597562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.597575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.597581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.597587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.597602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.607500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.607552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.607565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.607572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.607578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.607592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.617552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.617606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.617618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.617625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.617631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.617645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
00:27:19.946 [2024-11-20 10:44:20.627568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.946 [2024-11-20 10:44:20.627664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.946 [2024-11-20 10:44:20.627677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.946 [2024-11-20 10:44:20.627684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.946 [2024-11-20 10:44:20.627689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6424000b90 00:27:19.946 [2024-11-20 10:44:20.627704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.946 qpair failed and we were unable to recover it. 
[... the identical CONNECT failure cycle (ctrlr.c:762 "Unknown controller ID 0x1" → nvme_fabric.c connect error sct 1, sc 130 → nvme_tcp.c failed connect tqpair=0x7f6424000b90 → CQ transport error -6 on qpair id 2, unrecovered) repeats every ~10 ms from 10:44:20.637 through 10:44:20.948; 32 duplicate entries elided ...]
00:27:20.466 [2024-11-20 10:44:20.958552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:20.958691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:20.958742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:20.958765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:20.958785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6420000b90 00:27:20.466 [2024-11-20 10:44:20.958833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 10:44:20.968533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:20.968599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:20.968623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:20.968636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:20.968647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6420000b90 00:27:20.466 [2024-11-20 10:44:20.968674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 10:44:20.978594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:20.978706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:20.978766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:20.978790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:20.978810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f642c000b90 00:27:20.466 [2024-11-20 10:44:20.978858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 10:44:20.988602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:20.988672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:20.988696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:20.988710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:20.988721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f642c000b90 00:27:20.466 [2024-11-20 10:44:20.988749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.466 qpair failed and we were unable to recover it. 00:27:20.466 [2024-11-20 10:44:20.988918] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:20.466 A controller has encountered a failure and is being reset. 
00:27:20.466 [2024-11-20 10:44:20.998786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:20.998882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:20.998935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:20.998970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:20.998988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23a6ba0 00:27:20.466 [2024-11-20 10:44:20.999036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 10:44:21.008685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 10:44:21.008754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 10:44:21.008780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 10:44:21.008793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 10:44:21.008804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23a6ba0 00:27:20.466 [2024-11-20 10:44:21.008832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.466 qpair failed and we were unable to recover it. 00:27:20.466 Controller properly reset. 00:27:20.466 Initializing NVMe Controllers 00:27:20.466 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:20.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:20.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:20.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:20.466 Initialization complete. Launching workers. 
00:27:20.466 Starting thread on core 1 00:27:20.466 Starting thread on core 2 00:27:20.466 Starting thread on core 3 00:27:20.467 Starting thread on core 0 00:27:20.467 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:20.467 00:27:20.467 real 0m10.928s 00:27:20.467 user 0m19.467s 00:27:20.467 sys 0m4.400s 00:27:20.467 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.467 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.467 ************************************ 00:27:20.467 END TEST nvmf_target_disconnect_tc2 00:27:20.467 ************************************ 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.725 rmmod nvme_tcp 00:27:20.725 rmmod nvme_fabrics 00:27:20.725 rmmod nvme_keyring 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3646330 ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3646330 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3646330 ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3646330 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3646330 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3646330' 00:27:20.725 killing process with pid 3646330 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3646330 00:27:20.725 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3646330 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.984 10:44:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.888 10:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:22.888 00:27:22.888 real 0m19.632s 00:27:22.888 user 0m47.704s 00:27:22.888 sys 0m9.241s 00:27:22.888 10:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.888 10:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:22.888 ************************************ 00:27:22.888 END TEST nvmf_target_disconnect 00:27:22.888 ************************************ 00:27:22.888 10:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:22.888 00:27:22.888 real 5m53.026s 00:27:22.888 user 10m39.795s 00:27:22.888 sys 1m58.325s 00:27:22.888 10:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.888 10:44:23 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.888 ************************************ 00:27:22.888 END TEST nvmf_host 00:27:22.888 ************************************ 00:27:23.146 10:44:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:23.146 10:44:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:23.146 10:44:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.146 10:44:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.146 10:44:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.147 10:44:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.147 ************************************ 00:27:23.147 START TEST nvmf_target_core_interrupt_mode 00:27:23.147 ************************************ 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.147 * Looking for test storage... 
00:27:23.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:23.147 10:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.147 --rc 
genhtml_branch_coverage=1 00:27:23.147 --rc genhtml_function_coverage=1 00:27:23.147 --rc genhtml_legend=1 00:27:23.147 --rc geninfo_all_blocks=1 00:27:23.147 --rc geninfo_unexecuted_blocks=1 00:27:23.147 00:27:23.147 ' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.147 --rc genhtml_branch_coverage=1 00:27:23.147 --rc genhtml_function_coverage=1 00:27:23.147 --rc genhtml_legend=1 00:27:23.147 --rc geninfo_all_blocks=1 00:27:23.147 --rc geninfo_unexecuted_blocks=1 00:27:23.147 00:27:23.147 ' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.147 --rc genhtml_branch_coverage=1 00:27:23.147 --rc genhtml_function_coverage=1 00:27:23.147 --rc genhtml_legend=1 00:27:23.147 --rc geninfo_all_blocks=1 00:27:23.147 --rc geninfo_unexecuted_blocks=1 00:27:23.147 00:27:23.147 ' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.147 --rc genhtml_branch_coverage=1 00:27:23.147 --rc genhtml_function_coverage=1 00:27:23.147 --rc genhtml_legend=1 00:27:23.147 --rc geninfo_all_blocks=1 00:27:23.147 --rc geninfo_unexecuted_blocks=1 00:27:23.147 00:27:23.147 ' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.147 
10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.147 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.406 10:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:23.406 
10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.406 ************************************ 00:27:23.406 START TEST nvmf_abort 00:27:23.406 ************************************ 00:27:23.406 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:23.406 * Looking for test storage... 
00:27:23.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:23.407 10:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.407 --rc genhtml_branch_coverage=1 00:27:23.407 --rc genhtml_function_coverage=1 00:27:23.407 --rc genhtml_legend=1 00:27:23.407 --rc geninfo_all_blocks=1 00:27:23.407 --rc geninfo_unexecuted_blocks=1 00:27:23.407 00:27:23.407 ' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.407 --rc genhtml_branch_coverage=1 00:27:23.407 --rc genhtml_function_coverage=1 00:27:23.407 --rc genhtml_legend=1 00:27:23.407 --rc geninfo_all_blocks=1 00:27:23.407 --rc geninfo_unexecuted_blocks=1 00:27:23.407 00:27:23.407 ' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.407 --rc genhtml_branch_coverage=1 00:27:23.407 --rc genhtml_function_coverage=1 00:27:23.407 --rc genhtml_legend=1 00:27:23.407 --rc geninfo_all_blocks=1 00:27:23.407 --rc geninfo_unexecuted_blocks=1 00:27:23.407 00:27:23.407 ' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.407 --rc genhtml_branch_coverage=1 00:27:23.407 --rc genhtml_function_coverage=1 00:27:23.407 --rc genhtml_legend=1 00:27:23.407 --rc geninfo_all_blocks=1 00:27:23.407 --rc geninfo_unexecuted_blocks=1 00:27:23.407 00:27:23.407 ' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.407 10:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.407 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.408 10:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.408 10:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.986 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:29.987 10:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:29.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:29.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.987 
10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:29.987 Found net devices under 0000:86:00.0: cvl_0_0 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:29.987 Found net devices under 0000:86:00.1: cvl_0_1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.987 10:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:29.987 10:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:29.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:27:29.987 00:27:29.987 --- 10.0.0.2 ping statistics --- 00:27:29.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.987 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:27:29.987 00:27:29.987 --- 10.0.0.1 ping statistics --- 00:27:29.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.987 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3651054 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:29.987 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3651054 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3651054 ']' 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 [2024-11-20 10:44:30.115992] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:29.988 [2024-11-20 10:44:30.116935] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:27:29.988 [2024-11-20 10:44:30.116977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.988 [2024-11-20 10:44:30.187642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:29.988 [2024-11-20 10:44:30.228504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.988 [2024-11-20 10:44:30.228539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.988 [2024-11-20 10:44:30.228546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.988 [2024-11-20 10:44:30.228552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.988 [2024-11-20 10:44:30.228557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.988 [2024-11-20 10:44:30.230001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.988 [2024-11-20 10:44:30.230096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.988 [2024-11-20 10:44:30.230097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.988 [2024-11-20 10:44:30.297409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:29.988 [2024-11-20 10:44:30.298207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:29.988 [2024-11-20 10:44:30.298466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:29.988 [2024-11-20 10:44:30.298624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 [2024-11-20 10:44:30.378915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:29.988 Malloc0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 Delay0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 [2024-11-20 10:44:30.462882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.988 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:29.988 [2024-11-20 10:44:30.593194] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:32.518 Initializing NVMe Controllers 00:27:32.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:32.518 controller IO queue size 128 less than required 00:27:32.518 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:32.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:32.518 Initialization complete. Launching workers. 
00:27:32.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36934 00:27:32.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36991, failed to submit 66 00:27:32.518 success 36934, unsuccessful 57, failed 0 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.518 rmmod nvme_tcp 00:27:32.518 rmmod nvme_fabrics 00:27:32.518 rmmod nvme_keyring 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.518 10:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3651054 ']' 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3651054 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3651054 ']' 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3651054 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3651054 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3651054' 00:27:32.518 killing process with pid 3651054 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3651054 00:27:32.518 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3651054 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.518 10:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.518 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.422 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.422 00:27:34.422 real 0m11.210s 00:27:34.422 user 0m10.577s 00:27:34.422 sys 0m5.773s 00:27:34.422 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.422 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.422 ************************************ 00:27:34.422 END TEST nvmf_abort 00:27:34.422 ************************************ 00:27:34.682 10:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 ************************************ 00:27:34.682 START TEST nvmf_ns_hotplug_stress 00:27:34.682 ************************************ 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:34.682 * Looking for test storage... 
00:27:34.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:34.682 10:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:34.682 10:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.682 --rc genhtml_branch_coverage=1 00:27:34.682 --rc genhtml_function_coverage=1 00:27:34.682 --rc genhtml_legend=1 00:27:34.682 --rc geninfo_all_blocks=1 00:27:34.682 --rc geninfo_unexecuted_blocks=1 00:27:34.682 00:27:34.682 ' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.682 --rc genhtml_branch_coverage=1 00:27:34.682 --rc genhtml_function_coverage=1 00:27:34.682 --rc genhtml_legend=1 00:27:34.682 --rc geninfo_all_blocks=1 00:27:34.682 --rc geninfo_unexecuted_blocks=1 00:27:34.682 00:27:34.682 ' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.682 --rc genhtml_branch_coverage=1 00:27:34.682 --rc genhtml_function_coverage=1 00:27:34.682 --rc genhtml_legend=1 00:27:34.682 --rc geninfo_all_blocks=1 00:27:34.682 --rc geninfo_unexecuted_blocks=1 00:27:34.682 00:27:34.682 ' 00:27:34.682 10:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.682 --rc genhtml_branch_coverage=1 00:27:34.682 --rc genhtml_function_coverage=1 00:27:34.682 --rc genhtml_legend=1 00:27:34.682 --rc geninfo_all_blocks=1 00:27:34.682 --rc geninfo_unexecuted_blocks=1 00:27:34.682 00:27:34.682 ' 00:27:34.682 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.683 10:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.683 
10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:34.683 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:34.942 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.943 10:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.512 
10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.512 10:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.512 10:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.512 
10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.512 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.512 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.512 
10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.512 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:27:41.513 00:27:41.513 --- 10.0.0.2 ping statistics --- 00:27:41.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.513 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:27:41.513 00:27:41.513 --- 10.0.0.1 ping statistics --- 00:27:41.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.513 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.513 10:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3655047 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3655047 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3655047 ']' 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.513 10:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.513 [2024-11-20 10:44:41.382370] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:41.513 [2024-11-20 10:44:41.383322] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:27:41.513 [2024-11-20 10:44:41.383358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.513 [2024-11-20 10:44:41.459455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.513 [2024-11-20 10:44:41.500996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.513 [2024-11-20 10:44:41.501035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.513 [2024-11-20 10:44:41.501042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.513 [2024-11-20 10:44:41.501047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.513 [2024-11-20 10:44:41.501052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.513 [2024-11-20 10:44:41.502536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.513 [2024-11-20 10:44:41.502646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.513 [2024-11-20 10:44:41.502648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.513 [2024-11-20 10:44:41.570579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:41.513 [2024-11-20 10:44:41.571450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:41.513 [2024-11-20 10:44:41.571613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:41.513 [2024-11-20 10:44:41.571775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:41.513 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.513 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:41.513 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.513 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.513 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.772 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.772 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:41.772 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:41.772 [2024-11-20 10:44:42.423404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.772 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:42.030 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.288 [2024-11-20 10:44:42.827825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.288 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.546 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:42.546 Malloc0 00:27:42.804 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:42.804 Delay0 00:27:42.804 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.062 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:43.319 NULL1 00:27:43.319 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:43.576 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3655529 00:27:43.576 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:43.576 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:43.576 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.948 Read completed with error (sct=0, sc=11) 00:27:44.948 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.948 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:44.948 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:45.205 true 00:27:45.205 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:45.205 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.137 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.137 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:46.137 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:46.394 true 00:27:46.394 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:46.394 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:46.651 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.651 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:46.651 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:46.907 true 00:27:46.907 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:46.907 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.838 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.095 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:48.095 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:48.353 true 00:27:48.353 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:48.353 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.610 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.868 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:48.868 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:48.868 true 00:27:48.868 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:48.868 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.240 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:50.240 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:50.497 true 00:27:50.497 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:50.497 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.427 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.427 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:51.427 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:51.683 true 00:27:51.683 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:51.683 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.940 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.197 10:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:52.197 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:52.197 true 00:27:52.197 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:52.197 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.567 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.567 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:53.567 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:53.567 true 00:27:53.824 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:53.824 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.824 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.082 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:54.082 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:54.339 true 00:27:54.339 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:54.339 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.711 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.712 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:55.712 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:55.969 true 00:27:55.969 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:55.969 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.901 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.901 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:56.901 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:57.158 true 00:27:57.158 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:57.158 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.415 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.415 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:57.415 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:57.673 true 00:27:57.673 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:57.673 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.614 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.880 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:58.880 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:59.138 true 00:27:59.138 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:27:59.138 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.071 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.071 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:00.071 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:00.328 true 00:28:00.328 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:00.328 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.585 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.843 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:00.843 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:01.100 true 00:28:01.100 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3655529 00:28:01.100 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.032 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.289 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:02.289 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:02.546 true 00:28:02.546 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:02.546 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.478 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.478 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:03.478 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:03.735 true 00:28:03.735 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:03.735 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.993 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.250 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:04.250 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:04.250 true 00:28:04.250 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:04.250 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.623 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.623 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:05.623 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:05.880 true 00:28:05.880 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:05.880 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.812 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.812 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:06.812 10:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:07.069 true 00:28:07.069 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:07.069 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.326 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.583 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:07.583 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:07.840 true 00:28:07.840 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:07.840 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.772 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.772 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:28:08.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.029 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:09.029 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:09.285 true 00:28:09.285 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:09.285 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.217 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.217 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:10.217 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:10.474 true 00:28:10.474 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:10.474 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.732 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.989 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:10.989 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:10.989 true 00:28:10.989 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529 00:28:10.990 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.362 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.362 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:12.362 10:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:28:12.362 10:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:28:12.619 true
00:28:12.619 10:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529
00:28:12.620 10:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:13.551 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:13.551 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:13.551 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:13.808 Initializing NVMe Controllers
00:28:13.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:13.808 Controller IO queue size 128, less than required.
00:28:13.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:13.808 Controller IO queue size 128, less than required.
00:28:13.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:13.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:13.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:13.808 Initialization complete. Launching workers.
00:28:13.808 ========================================================
00:28:13.808                                                                                                    Latency(us)
00:28:13.808 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:13.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2052.41       1.00   43230.70    2526.23 1013279.82
00:28:13.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17874.35       8.73    7161.36    1331.03  380709.39
00:28:13.808 ========================================================
00:28:13.808 Total                                                                    :   19926.76       9.73   10876.41    1331.03 1013279.82
00:28:13.808
00:28:13.808 true
00:28:13.808 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3655529
00:28:13.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3655529) - No such process
00:28:13.808 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3655529
00:28:13.808 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:14.066 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
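The trace above repeats the same cycle from ns_hotplug_stress.sh lines @44-@50: check the target is alive, hot-remove NSID 1, re-add the Delay0 bdev as a namespace, and grow the NULL1 bdev by one block, all while bdevperf I/O is in flight. A minimal stand-alone sketch of that loop, with `rpc` as a hypothetical stub for scripts/rpc.py (so it runs without an SPDK target) and the PID and null_size values taken from the log; this is not the script's actual source:

```shell
# Stub standing in for /var/jenkins/.../spdk/scripts/rpc.py (assumption:
# no live SPDK target here, so just echo the call that would be made).
rpc() { echo "rpc.py $*"; }

target_pid=$$        # stand-in for the nvmf_tgt PID (3655529 in the log)
null_size=1013       # NULL1 grows by one block per iteration in the trace

for _ in 1 2 3; do
    kill -0 "$target_pid" || break                               # @44: loop until the target exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the bdev
    null_size=$((null_size + 1))                                 # @49
    rpc bdev_null_resize NULL1 "$null_size"                      # @50: resize under I/O
done
echo "final null_size=$null_size"
```

When the target finally exits, `kill -0` fails ("No such process" at line 44 above) and the script falls through to `wait` and cleanup, which is exactly the transition visible in the log.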
00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:14.324 null0 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.324 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:14.587 null1 00:28:14.587 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.587 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.587 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:14.879 null2 00:28:14.879 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.879 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.879 10:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:14.879 null3 00:28:14.879 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.879 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.879 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:15.157 null4 00:28:15.157 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.158 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.158 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:15.431 null5 00:28:15.431 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.431 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.431 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:15.431 null6 00:28:15.431 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.431 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.431 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:15.708 null7 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:15.708 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
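The interleaved trace above comes from the parallel phase (@58-@66): eight null bdevs are created, then eight backgrounded `add_remove` workers each add and remove their own namespace ID ten times, and the script waits on all of their PIDs. A hedged reconstruction of that phase, again with `rpc` as a hypothetical no-op stub in place of scripts/rpc.py; the real script drives a live SPDK target:

```shell
rpc() { :; }   # assumption: no SPDK target available, so RPC calls are no-ops

add_remove() {                     # @14-@18: one worker, one namespace ID
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # @60: one null bdev per worker
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &         # @63: NSID i+1 hammers null<i>
    pids+=($!)                               # @64: collect worker PIDs
done
wait "${pids[@]}"                            # @66: the wait visible in the trace
echo "workers done: ${#pids[@]}"
```

Because the eight workers run concurrently, their `set -x` trace lines interleave arbitrarily, which is why the @59/@62/@63/@64 lines above appear shuffled rather than in script order.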
00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3661165 3661167 3661169 3661173 3661175 3661178 3661181 3661185 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.709 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.978 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.235 10:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.235 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.492 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.492 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.493 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.750 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.750 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.007 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.007 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.264 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.264 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.264 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.265 10:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.265 10:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.523 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.523 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.781 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.781 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.040 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.040 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.298 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.298 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.298 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.298 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.298 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.298 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.557 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.814 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.815 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.073 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.331 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.331 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.589 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.590 10:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.590 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:19.848 10:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.848 rmmod nvme_tcp 00:28:19.848 rmmod nvme_fabrics 00:28:19.848 rmmod nvme_keyring 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3655047 ']' 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3655047 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3655047 ']' 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3655047 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.848 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3655047 00:28:20.107 10:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3655047' 00:28:20.107 killing process with pid 3655047 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3655047 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3655047 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.107 10:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.107 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.644 00:28:22.644 real 0m47.661s 00:28:22.644 user 2m55.664s 00:28:22.644 sys 0m20.017s 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.644 ************************************ 00:28:22.644 END TEST nvmf_ns_hotplug_stress 00:28:22.644 ************************************ 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:22.644 ************************************ 00:28:22.644 START TEST nvmf_delete_subsystem 00:28:22.644 ************************************ 00:28:22.644 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
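The trace above is SPDK's `target/ns_hotplug_stress.sh` racing namespace attach/detach RPCs against subsystem `nqn.2016-06.io.spdk:cnode1`. A minimal runnable sketch of that loop pattern follows; the rpc.py path, subsystem NQN, and `null0`..`null7` bdev names are taken from the log, while the iteration count, the random ordering via `shuf`, and the `RPC`/`echo` stand-in (so the sketch runs without a live SPDK target) are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the add/remove stress pattern seen in the trace.
# RPC defaults to 'echo' so this runs without a live SPDK target;
# point RPC at scripts/rpc.py to drive a real one (assumption: the
# target already has bdevs null0..null7 and the subsystem created).
rpc=${RPC:-echo}
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 3 )); do                       # the real test loops 10 times
    # Attach nsid 1..8, each backed by bdev null$((n-1)), in random order.
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # Detach them again, also randomly ordered, to stress hotplug teardown.
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
done
```

The shuffled ordering mirrors what the trace shows: add and remove records arrive in a different nsid order each pass, which is what exercises concurrent hotplug paths rather than a fixed sequence.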
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:22.644 * Looking for test storage... 00:28:22.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.644 
10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:22.644 10:45:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.644 --rc genhtml_branch_coverage=1 00:28:22.644 --rc genhtml_function_coverage=1 00:28:22.644 --rc genhtml_legend=1 00:28:22.644 --rc geninfo_all_blocks=1 00:28:22.644 --rc geninfo_unexecuted_blocks=1 00:28:22.644 00:28:22.644 ' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.644 --rc genhtml_branch_coverage=1 00:28:22.644 --rc genhtml_function_coverage=1 00:28:22.644 --rc genhtml_legend=1 00:28:22.644 --rc geninfo_all_blocks=1 00:28:22.644 --rc geninfo_unexecuted_blocks=1 00:28:22.644 00:28:22.644 ' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.644 --rc genhtml_branch_coverage=1 00:28:22.644 --rc genhtml_function_coverage=1 00:28:22.644 --rc genhtml_legend=1 00:28:22.644 --rc geninfo_all_blocks=1 00:28:22.644 --rc 
geninfo_unexecuted_blocks=1 00:28:22.644 00:28:22.644 ' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.644 --rc genhtml_branch_coverage=1 00:28:22.644 --rc genhtml_function_coverage=1 00:28:22.644 --rc genhtml_legend=1 00:28:22.644 --rc geninfo_all_blocks=1 00:28:22.644 --rc geninfo_unexecuted_blocks=1 00:28:22.644 00:28:22.644 ' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.644 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.645 
10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.645 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.645 10:45:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.209 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:29.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:29.210 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.210 10:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:29.210 Found net devices under 0000:86:00.0: cvl_0_0 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:29.210 Found net devices under 0000:86:00.1: cvl_0_1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.210 10:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.210 10:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:29.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:28:29.210 00:28:29.210 --- 10.0.0.2 ping statistics --- 00:28:29.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.210 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:29.210 00:28:29.210 --- 10.0.0.1 ping statistics --- 00:28:29.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.210 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.210 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3665537 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3665537 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3665537 ']' 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:28:29.211 [2024-11-20 10:45:29.134433] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:29.211 [2024-11-20 10:45:29.135336] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization...
00:28:29.211 [2024-11-20 10:45:29.135366] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:29.211 [2024-11-20 10:45:29.213126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:29.211 [2024-11-20 10:45:29.254071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:29.211 [2024-11-20 10:45:29.254110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:29.211 [2024-11-20 10:45:29.254117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:29.211 [2024-11-20 10:45:29.254123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:29.211 [2024-11-20 10:45:29.254128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:29.211 [2024-11-20 10:45:29.255350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:29.211 [2024-11-20 10:45:29.255351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:29.211 [2024-11-20 10:45:29.322675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:29.211 [2024-11-20 10:45:29.323211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:29.211 [2024-11-20 10:45:29.323421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 [2024-11-20 10:45:29.388166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 [2024-11-20 10:45:29.416473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 NULL1
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 Delay0
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3665630
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:28:29.211 10:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:29.211 [2024-11-20 10:45:29.530824] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:31.107 10:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.108 10:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.108 10:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 
Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 [2024-11-20 10:45:31.688064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c12c0 is same with the state(6) to be set 
00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read 
completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 [2024-11-20 10:45:31.688483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c1860 is same with the state(6) to be set 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 
starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.108 Read completed with error (sct=0, sc=8) 00:28:31.108 starting I/O failed: -6 00:28:31.108 Write completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 
00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, 
sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Write completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 Read completed with error (sct=0, sc=8) 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:31.109 starting I/O failed: -6 00:28:32.043 [2024-11-20 10:45:32.667895] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c29a0 is same with the state(6) to be set 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Read completed with error (sct=0, sc=8) 00:28:32.043 Write completed with error (sct=0, sc=8) 00:28:32.044 [2024-11-20 10:45:32.691806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c1680 is same with the state(6) to be set 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 
00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 [2024-11-20 10:45:32.691945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c14a0 is same with the state(6) to be set 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 
Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 [2024-11-20 10:45:32.692598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f681400d020 is same with the state(6) to be set 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write 
completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Write completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 Read completed with error (sct=0, sc=8) 00:28:32.044 [2024-11-20 10:45:32.693333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f681400d800 is same with the state(6) to be set 00:28:32.044 Initializing NVMe Controllers 00:28:32.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.044 Controller IO queue size 128, less than required. 00:28:32.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:32.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:32.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:32.044 Initialization complete. Launching workers.
00:28:32.044 ========================================================
00:28:32.044 Latency(us)
00:28:32.044 Device Information : IOPS MiB/s Average min max
00:28:32.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.68 0.09 881367.41 428.96 1006151.30
00:28:32.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 177.67 0.09 925180.17 302.14 1009737.40
00:28:32.044 ========================================================
00:28:32.044 Total : 353.34 0.17 903397.21 302.14 1009737.40
00:28:32.044
00:28:32.044 [2024-11-20 10:45:32.693994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4c29a0 (9): Bad file descriptor
00:28:32.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:32.044 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.044 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:32.044 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3665630
00:28:32.044 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3665630
00:28:32.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3665630) - No such process
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3665630
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3665630
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3665630
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:32.612 [2024-11-20 10:45:33.224401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3666246
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246
00:28:32.612 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:32.612 [2024-11-20 10:45:33.307837] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:33.177 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:33.177 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246
00:28:33.177 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:33.742 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:33.742 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246
00:28:33.742 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:34.306 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:34.306 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246
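The `@57`/`@58` trace lines above are delete_subsystem.sh's wait loop: probe the perf pid with `kill -0` every 0.5 s until spdk_nvme_perf exits, failing the test if the counter passes its bound (the first phase uses the same loop at lines 35-38 with a bound of 30). A minimal standalone sketch of that pattern; the background `sleep 2` is an assumption standing in for the real spdk_nvme_perf workload:

```shell
#!/bin/sh
# Sketch of the poll-until-exit loop traced above. `sleep 2` stands in
# for spdk_nvme_perf, which in the real test dies once its subsystem is
# torn down out from under it.
sleep 2 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do  # line 57's probe: kill -0 $perf_pid
    delay=$((delay + 1))                   # line 60's counter: (( delay++ > 20 ))
    if [ "$delay" -gt 20 ]; then
        echo "perf did not exit in time" >&2
        exit 1
    fi
    sleep 0.5                              # line 58's cadence
done
echo "pid $perf_pid gone after $delay polls"
```

`kill -0` sends no signal; it only checks that the pid still exists, which is why the loop body above never disturbs the workload it is watching.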
00:28:34.306 10:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:34.563 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:34.563 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246 00:28:34.563 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.126 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:35.126 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246 00:28:35.126 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.691 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:35.691 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246 00:28:35.691 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.977 Initializing NVMe Controllers 00:28:35.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.977 Controller IO queue size 128, less than required. 00:28:35.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:35.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:35.977 Initialization complete. Launching workers. 
00:28:35.977 ========================================================
00:28:35.977 Latency(us)
00:28:35.977 Device Information : IOPS MiB/s Average min max
00:28:35.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002222.68 1000135.13 1006301.28
00:28:35.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003959.52 1000178.43 1009962.95
00:28:35.977 ========================================================
00:28:35.977 Total : 256.00 0.12 1003091.10 1000135.13 1009962.95
00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3666246 00:28:36.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3666246) - No such process 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3666246 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
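The repeated `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` records above trace a PID-polling wait loop in delete_subsystem.sh. This is a generic reconstruction of that pattern, not SPDK's exact script; a background `sleep` stands in for the spdk_nvme_perf process so the sketch runs anywhere.

```shell
#!/usr/bin/env bash
# Poll a child PID with `kill -0` every 0.5 s, as in the trace above,
# giving up after 20 iterations (an upper bound of roughly 10 s).
sleep 1 &                         # stand-in for the spdk_nvme_perf process
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # bail out if the process outlives the budget
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null      # reap the child and collect its status
echo "process $perf_pid has exited"
```

Once the process is gone, `kill -0` fails with "No such process", which is exactly the message the log shows before the script moves on to `wait` and cleanup.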
nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.236 rmmod nvme_tcp 00:28:36.236 rmmod nvme_fabrics 00:28:36.236 rmmod nvme_keyring 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3665537 ']' 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3665537 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3665537 ']' 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3665537 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665537 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.236 10:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665537' 00:28:36.236 killing process with pid 3665537 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3665537 00:28:36.236 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3665537 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.495 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.495 10:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.400 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.400 00:28:38.400 real 0m16.177s 00:28:38.400 user 0m26.169s 00:28:38.400 sys 0m6.106s 00:28:38.400 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.400 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:38.400 ************************************ 00:28:38.400 END TEST nvmf_delete_subsystem 00:28:38.400 ************************************ 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:38.662 ************************************ 00:28:38.662 START TEST nvmf_host_management 00:28:38.662 ************************************ 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:38.662 * Looking for test storage... 
00:28:38.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.662 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.663 10:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.663 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.664 --rc genhtml_branch_coverage=1 00:28:38.664 --rc genhtml_function_coverage=1 00:28:38.664 --rc genhtml_legend=1 00:28:38.664 --rc geninfo_all_blocks=1 00:28:38.664 --rc geninfo_unexecuted_blocks=1 00:28:38.664 00:28:38.664 ' 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.664 --rc genhtml_branch_coverage=1 00:28:38.664 --rc genhtml_function_coverage=1 00:28:38.664 --rc genhtml_legend=1 00:28:38.664 --rc geninfo_all_blocks=1 00:28:38.664 --rc geninfo_unexecuted_blocks=1 00:28:38.664 00:28:38.664 ' 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.664 --rc genhtml_branch_coverage=1 00:28:38.664 --rc genhtml_function_coverage=1 00:28:38.664 --rc genhtml_legend=1 00:28:38.664 --rc geninfo_all_blocks=1 00:28:38.664 --rc geninfo_unexecuted_blocks=1 00:28:38.664 00:28:38.664 ' 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:38.664 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.664 --rc genhtml_branch_coverage=1 00:28:38.664 --rc genhtml_function_coverage=1 00:28:38.664 --rc genhtml_legend=1 00:28:38.664 --rc geninfo_all_blocks=1 00:28:38.664 --rc geninfo_unexecuted_blocks=1 00:28:38.664 00:28:38.664 ' 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.664 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.665 10:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.665 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.925 
10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.925 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.926 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.492 
10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.492 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.493 10:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:45.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.493 10:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:45.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.493 10:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:45.493 Found net devices under 0000:86:00.0: cvl_0_0 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:45.493 Found net devices under 0000:86:00.1: cvl_0_1 00:28:45.493 10:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
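The device-discovery step traced above (nvmf/common.sh@410-429) resolves each NIC's PCI address to its kernel interface names by globbing sysfs. A runnable sketch of the same logic, using a temporary directory as a stand-in for /sys/bus/pci/devices so it works without the actual e810 hardware:

```shell
# Stand-in sysfs tree; on the real host these paths live under
# /sys/bus/pci/devices and the interface names come from the ice driver.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob the per-device net dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```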
00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.493 10:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:28:45.493 00:28:45.493 --- 10.0.0.2 ping statistics --- 00:28:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.493 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:28:45.493 00:28:45.493 --- 10.0.0.1 ping statistics --- 00:28:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.493 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
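nvmf_tcp_init (nvmf/common.sh@250-291) moves the target-side port into a private network namespace, assigns each side a 10.0.0.x/24 address, opens TCP port 4420, and ping-verifies both directions. A dry-run sketch of that sequence follows; commands are echoed rather than executed, since they require root and the cvl_0_* interfaces (swap `run` for `sudo` on a real setup):

```shell
run() { echo "+ $*"; }   # dry-run wrapper; replace with 'sudo' to apply

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# the ipts helper also tags the rule with '-m comment' for later cleanup
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```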
00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.493 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3670245 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3670245 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3670245 ']' 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 [2024-11-20 10:45:45.329801] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.494 [2024-11-20 10:45:45.330812] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:28:45.494 [2024-11-20 10:45:45.330852] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.494 [2024-11-20 10:45:45.411378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.494 [2024-11-20 10:45:45.455244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.494 [2024-11-20 10:45:45.455292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.494 [2024-11-20 10:45:45.455300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.494 [2024-11-20 10:45:45.455306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.494 [2024-11-20 10:45:45.455314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.494 [2024-11-20 10:45:45.456859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.494 [2024-11-20 10:45:45.456984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.494 [2024-11-20 10:45:45.457014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.494 [2024-11-20 10:45:45.457015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.494 [2024-11-20 10:45:45.526001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.494 [2024-11-20 10:45:45.526670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.494 [2024-11-20 10:45:45.526962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:45.494 [2024-11-20 10:45:45.527306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:45.494 [2024-11-20 10:45:45.527339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
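nvmf_tgt is launched with `-m 0x1E`, and the four "Reactor started" lines above land on cores 1-4 accordingly. A small illustrative helper (not part of the test scripts) that decodes such a mask into the enabled core list:

```shell
# Decode a hex core mask into the list of enabled core numbers.
decode_coremask() {
    local mask core cores
    mask=$(( $1 ))
    core=0
    cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        (( core++, mask >>= 1 ))
    done
    echo "${cores[*]}"
}
decode_coremask 0x1E   # -> 1 2 3 4, matching the reactor cores in the log
```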
00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 [2024-11-20 10:45:45.593848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 10:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 Malloc0 00:28:45.494 [2024-11-20 10:45:45.686114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3670496 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3670496 /var/tmp/bdevperf.sock 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3670496 ']' 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.494 { 00:28:45.494 "params": { 00:28:45.494 "name": "Nvme$subsystem", 00:28:45.494 "trtype": "$TEST_TRANSPORT", 00:28:45.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.494 "adrfam": "ipv4", 00:28:45.494 "trsvcid": "$NVMF_PORT", 00:28:45.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.494 "hdgst": ${hdgst:-false}, 00:28:45.494 "ddgst": ${ddgst:-false} 00:28:45.494 }, 00:28:45.494 "method": "bdev_nvme_attach_controller" 00:28:45.494 } 00:28:45.494 EOF 00:28:45.494 )") 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:45.494 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.494 "params": { 00:28:45.494 "name": "Nvme0", 00:28:45.494 "trtype": "tcp", 00:28:45.494 "traddr": "10.0.0.2", 00:28:45.494 "adrfam": "ipv4", 00:28:45.494 "trsvcid": "4420", 00:28:45.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:45.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:45.494 "hdgst": false, 00:28:45.494 "ddgst": false 00:28:45.494 }, 00:28:45.494 "method": "bdev_nvme_attach_controller" 00:28:45.494 }' 00:28:45.494 [2024-11-20 10:45:45.780346] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:28:45.494 [2024-11-20 10:45:45.780395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670496 ] 00:28:45.494 [2024-11-20 10:45:45.857165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.494 [2024-11-20 10:45:45.898566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.494 Running I/O for 10 seconds... 
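gen_nvmf_target_json builds the controller entry that bdevperf receives via `--json /dev/fd/63`; the printf output above shows the fragment after the environment values are substituted. Reconstructed as a standalone heredoc (the trace shows only this fragment, so any surrounding bdevperf config wrapper is omitted here as well):

```shell
# Per-controller JSON fragment, values taken verbatim from the log above.
config=$(cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```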
00:28:45.494 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.494 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:45.494 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:45.494 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:45.495 10:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:45.495 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
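The waitforio helper (target/host_management.sh@52-64) polls bdev_get_iostat up to 10 times, 0.25 s apart, until num_read_ops crosses 100; in the trace it reads 78 on the first pass and 707 on the next. A runnable sketch of the loop, with the RPC replaced by a stub that replays those two values (in the real script the count comes from `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`):

```shell
# Stub: the first poll (i == 10) sees 78 read ops, later polls see 707.
read_count() { [ "$1" -eq 10 ] && echo 78 || echo 707; }

waitforio() {
    local i ret=1 count
    for (( i = 10; i != 0; i-- )); do
        count=$(read_count "$i")
        if [ "$count" -ge 100 ]; then ret=0; break; fi   # enough I/O seen
        sleep 0.25
    done
    return $ret
}
```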
00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.752 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.011 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.011 [2024-11-20 10:45:46.501771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.501990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.501998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.502005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.502013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.502020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:46.011 [2024-11-20 10:45:46.502028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.502035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.011 [2024-11-20 10:45:46.502043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.011 [2024-11-20 10:45:46.502050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 
[2024-11-20 10:45:46.502371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.012 [2024-11-20 10:45:46.502635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.012 [2024-11-20 10:45:46.502643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 
[2024-11-20 10:45:46.502719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.013 [2024-11-20 10:45:46.502802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.502810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b810 is same with the state(6) to be set 00:28:46.013 [2024-11-20 10:45:46.503802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:46.013 task offset: 105984 on job bdev=Nvme0n1 fails 00:28:46.013 00:28:46.013 Latency(us) 00:28:46.013 [2024-11-20T09:45:46.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.013 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.013 Job: Nvme0n1 ended in about 0.41 seconds with error 00:28:46.013 Verification LBA range: start 0x0 length 0x400 00:28:46.013 Nvme0n1 : 0.41 1880.00 117.50 156.67 0.00 30574.19 1510.18 27810.06 00:28:46.013 [2024-11-20T09:45:46.744Z] =================================================================================================================== 00:28:46.013 [2024-11-20T09:45:46.744Z] Total : 1880.00 117.50 156.67 0.00 30574.19 1510.18 27810.06 00:28:46.013 [2024-11-20 10:45:46.506219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:46.013 [2024-11-20 10:45:46.506241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22500 (9): Bad file descriptor 00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.013 [2024-11-20 10:45:46.507309] ctrlr.c: 
823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:46.013 [2024-11-20 10:45:46.507379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:46.013 [2024-11-20 10:45:46.507402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.013 [2024-11-20 10:45:46.507417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:46.013 [2024-11-20 10:45:46.507424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:46.013 [2024-11-20 10:45:46.507433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.013 [2024-11-20 10:45:46.507440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb22500 00:28:46.013 [2024-11-20 10:45:46.507461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22500 (9): Bad file descriptor 00:28:46.013 [2024-11-20 10:45:46.507473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:46.013 [2024-11-20 10:45:46.507480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:46.013 [2024-11-20 10:45:46.507489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:46.013 [2024-11-20 10:45:46.507497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.013 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3670496 00:28:46.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3670496) - No such process 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.945 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.945 { 00:28:46.945 "params": { 
00:28:46.945 "name": "Nvme$subsystem", 00:28:46.945 "trtype": "$TEST_TRANSPORT", 00:28:46.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.945 "adrfam": "ipv4", 00:28:46.945 "trsvcid": "$NVMF_PORT", 00:28:46.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.945 "hdgst": ${hdgst:-false}, 00:28:46.945 "ddgst": ${ddgst:-false} 00:28:46.945 }, 00:28:46.945 "method": "bdev_nvme_attach_controller" 00:28:46.945 } 00:28:46.945 EOF 00:28:46.946 )") 00:28:46.946 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:46.946 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:46.946 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:46.946 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:46.946 "params": { 00:28:46.946 "name": "Nvme0", 00:28:46.946 "trtype": "tcp", 00:28:46.946 "traddr": "10.0.0.2", 00:28:46.946 "adrfam": "ipv4", 00:28:46.946 "trsvcid": "4420", 00:28:46.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.946 "hdgst": false, 00:28:46.946 "ddgst": false 00:28:46.946 }, 00:28:46.946 "method": "bdev_nvme_attach_controller" 00:28:46.946 }' 00:28:46.946 [2024-11-20 10:45:47.572646] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:28:46.946 [2024-11-20 10:45:47.572692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670746 ] 00:28:46.946 [2024-11-20 10:45:47.648234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.203 [2024-11-20 10:45:47.688749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.459 Running I/O for 1 seconds... 00:28:48.391 1984.00 IOPS, 124.00 MiB/s 00:28:48.391 Latency(us) 00:28:48.391 [2024-11-20T09:45:49.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.391 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.391 Verification LBA range: start 0x0 length 0x400 00:28:48.391 Nvme0n1 : 1.01 2028.38 126.77 0.00 0.00 31043.00 6382.64 27240.18 00:28:48.391 [2024-11-20T09:45:49.122Z] =================================================================================================================== 00:28:48.391 [2024-11-20T09:45:49.122Z] Total : 2028.38 126.77 0.00 0.00 31043.00 6382.64 27240.18 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.648 rmmod nvme_tcp 00:28:48.648 rmmod nvme_fabrics 00:28:48.648 rmmod nvme_keyring 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3670245 ']' 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3670245 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3670245 ']' 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3670245 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:48.648 10:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670245 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670245' 00:28:48.648 killing process with pid 3670245 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3670245 00:28:48.648 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3670245 00:28:48.906 [2024-11-20 10:45:49.507927] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.906 10:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.906 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:51.439 00:28:51.439 real 0m12.414s 00:28:51.439 user 0m18.417s 00:28:51.439 sys 0m6.271s 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.439 ************************************ 00:28:51.439 END TEST nvmf_host_management 00:28:51.439 ************************************ 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:51.439 
10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:51.439 ************************************ 00:28:51.439 START TEST nvmf_lvol 00:28:51.439 ************************************ 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:51.439 * Looking for test storage... 00:28:51.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.439 10:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:51.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.439 --rc genhtml_branch_coverage=1 00:28:51.439 --rc 
genhtml_function_coverage=1 00:28:51.439 --rc genhtml_legend=1 00:28:51.439 --rc geninfo_all_blocks=1 00:28:51.439 --rc geninfo_unexecuted_blocks=1 00:28:51.439 00:28:51.439 ' 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:51.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.439 --rc genhtml_branch_coverage=1 00:28:51.439 --rc genhtml_function_coverage=1 00:28:51.439 --rc genhtml_legend=1 00:28:51.439 --rc geninfo_all_blocks=1 00:28:51.439 --rc geninfo_unexecuted_blocks=1 00:28:51.439 00:28:51.439 ' 00:28:51.439 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:51.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.439 --rc genhtml_branch_coverage=1 00:28:51.439 --rc genhtml_function_coverage=1 00:28:51.439 --rc genhtml_legend=1 00:28:51.439 --rc geninfo_all_blocks=1 00:28:51.439 --rc geninfo_unexecuted_blocks=1 00:28:51.439 00:28:51.440 ' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:51.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.440 --rc genhtml_branch_coverage=1 00:28:51.440 --rc genhtml_function_coverage=1 00:28:51.440 --rc genhtml_legend=1 00:28:51.440 --rc geninfo_all_blocks=1 00:28:51.440 --rc geninfo_unexecuted_blocks=1 00:28:51.440 00:28:51.440 ' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.440 10:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.440 10:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.440 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.007 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.007 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.007 10:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.007 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.007 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.007 10:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.007 10:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.007 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:58.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:28:58.008 00:28:58.008 --- 10.0.0.2 ping statistics --- 00:28:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.008 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:58.008 00:28:58.008 --- 10.0.0.1 ping statistics --- 00:28:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.008 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:58.008 
10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3674503 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3674503 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3674503 ']' 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.008 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:58.008 [2024-11-20 10:45:57.839403] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:58.008 [2024-11-20 10:45:57.840294] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:28:58.008 [2024-11-20 10:45:57.840327] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.008 [2024-11-20 10:45:57.920213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.008 [2024-11-20 10:45:57.960942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.008 [2024-11-20 10:45:57.960982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.008 [2024-11-20 10:45:57.960989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.008 [2024-11-20 10:45:57.960994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.008 [2024-11-20 10:45:57.960999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.008 [2024-11-20 10:45:57.962444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.008 [2024-11-20 10:45:57.962552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.008 [2024-11-20 10:45:57.962553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.008 [2024-11-20 10:45:58.031221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:58.008 [2024-11-20 10:45:58.032066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:58.008 [2024-11-20 10:45:58.032266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:58.008 [2024-11-20 10:45:58.032402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.008 [2024-11-20 10:45:58.271407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:58.008 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:58.266 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:58.266 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:58.266 10:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:58.525 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=99bc1f18-88e6-4a5a-8b08-a8bcdb16a809 00:28:58.525 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99bc1f18-88e6-4a5a-8b08-a8bcdb16a809 lvol 20 00:28:58.782 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fc435636-48d9-485b-87c6-9b42a7fff247 00:28:58.782 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:59.040 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc435636-48d9-485b-87c6-9b42a7fff247 00:28:59.040 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:59.298 [2024-11-20 10:45:59.935250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.298 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:59.555 
10:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3674973 00:28:59.555 10:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:59.555 10:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:00.486 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fc435636-48d9-485b-87c6-9b42a7fff247 MY_SNAPSHOT 00:29:00.744 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=60a75f79-0cec-4547-b173-17311fab5efe 00:29:00.744 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fc435636-48d9-485b-87c6-9b42a7fff247 30 00:29:01.002 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 60a75f79-0cec-4547-b173-17311fab5efe MY_CLONE 00:29:01.261 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=854a5c30-b35c-427b-a43c-786b2181d038 00:29:01.261 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 854a5c30-b35c-427b-a43c-786b2181d038 00:29:01.827 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3674973 00:29:09.995 Initializing NVMe Controllers 00:29:09.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:09.995 
Controller IO queue size 128, less than required. 00:29:09.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:09.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:09.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:09.995 Initialization complete. Launching workers. 00:29:09.995 ======================================================== 00:29:09.995 Latency(us) 00:29:09.995 Device Information : IOPS MiB/s Average min max 00:29:09.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12129.40 47.38 10553.61 771.29 60479.15 00:29:09.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12069.80 47.15 10609.86 5590.59 60474.29 00:29:09.995 ======================================================== 00:29:09.995 Total : 24199.20 94.53 10581.67 771.29 60479.15 00:29:09.995 00:29:09.995 10:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:10.304 10:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc435636-48d9-485b-87c6-9b42a7fff247 00:29:10.562 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99bc1f18-88e6-4a5a-8b08-a8bcdb16a809 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.820 rmmod nvme_tcp 00:29:10.820 rmmod nvme_fabrics 00:29:10.820 rmmod nvme_keyring 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3674503 ']' 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3674503 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3674503 ']' 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3674503 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3674503 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674503' 00:29:10.820 killing process with pid 3674503 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3674503 00:29:10.820 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3674503 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.078 10:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.078 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.982 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:12.982 00:29:12.982 real 0m22.010s 00:29:12.982 user 0m56.155s 00:29:12.982 sys 0m9.913s 00:29:12.982 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.982 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:12.982 ************************************ 00:29:12.982 END TEST nvmf_lvol 00:29:12.982 ************************************ 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:13.241 ************************************ 00:29:13.241 START TEST nvmf_lvs_grow 00:29:13.241 ************************************ 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:13.241 * Looking for test storage... 
00:29:13.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.241 10:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:13.241 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.242 10:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.242 --rc genhtml_branch_coverage=1 00:29:13.242 --rc genhtml_function_coverage=1 00:29:13.242 --rc genhtml_legend=1 00:29:13.242 --rc geninfo_all_blocks=1 00:29:13.242 --rc geninfo_unexecuted_blocks=1 00:29:13.242 00:29:13.242 ' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.242 --rc genhtml_branch_coverage=1 00:29:13.242 --rc genhtml_function_coverage=1 00:29:13.242 --rc genhtml_legend=1 00:29:13.242 --rc geninfo_all_blocks=1 00:29:13.242 --rc geninfo_unexecuted_blocks=1 00:29:13.242 00:29:13.242 ' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.242 --rc genhtml_branch_coverage=1 00:29:13.242 --rc genhtml_function_coverage=1 00:29:13.242 --rc genhtml_legend=1 00:29:13.242 --rc geninfo_all_blocks=1 00:29:13.242 --rc geninfo_unexecuted_blocks=1 00:29:13.242 00:29:13.242 ' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.242 --rc genhtml_branch_coverage=1 00:29:13.242 --rc genhtml_function_coverage=1 00:29:13.242 --rc genhtml_legend=1 00:29:13.242 --rc geninfo_all_blocks=1 00:29:13.242 --rc 
geninfo_unexecuted_blocks=1 00:29:13.242 00:29:13.242 ' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.242 10:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.242 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.501 10:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.501 10:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.501 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:20.069 
10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.069 10:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.069 10:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:20.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.069 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:20.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:20.070 Found net devices under 0000:86:00.0: cvl_0_0 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.070 10:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:20.070 Found net devices under 0000:86:00.1: cvl_0_1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.070 
10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:29:20.070 00:29:20.070 --- 10.0.0.2 ping statistics --- 00:29:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.070 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:20.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:29:20.070 00:29:20.070 --- 10.0.0.1 ping statistics --- 00:29:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.070 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 10:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3680139 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3680139 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3680139 ']' 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.070 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 [2024-11-20 10:46:19.934553] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:20.070 [2024-11-20 10:46:19.935488] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:29:20.070 [2024-11-20 10:46:19.935520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.070 [2024-11-20 10:46:20.014591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.070 [2024-11-20 10:46:20.069651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.070 [2024-11-20 10:46:20.069686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.070 [2024-11-20 10:46:20.069694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.070 [2024-11-20 10:46:20.069700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.071 [2024-11-20 10:46:20.069706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.071 [2024-11-20 10:46:20.070256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.071 [2024-11-20 10:46:20.137789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:20.071 [2024-11-20 10:46:20.138014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:20.071 [2024-11-20 10:46:20.374896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:20.071 ************************************ 00:29:20.071 START TEST lvs_grow_clean 00:29:20.071 ************************************ 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:20.071 10:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:20.071 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:20.329 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:20.329 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:20.329 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 106d0970-9bd3-49dd-9d61-f44d5af211ab lvol 150 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=512dba11-9065-4245-9aae-c7dbf91dbbe2 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:20.587 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:20.846 [2024-11-20 10:46:21.422644] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:20.846 [2024-11-20 10:46:21.422774] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:20.846 true 00:29:20.846 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:20.846 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:21.104 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:21.104 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:21.363 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 512dba11-9065-4245-9aae-c7dbf91dbbe2 00:29:21.363 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:21.621 [2024-11-20 10:46:22.215143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.621 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:21.878 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3680628 00:29:21.878 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.878 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:21.878 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3680628 /var/tmp/bdevperf.sock 00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3680628 ']' 00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.879 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.879 [2024-11-20 10:46:22.491913] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:29:21.879 [2024-11-20 10:46:22.491967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680628 ] 00:29:21.879 [2024-11-20 10:46:22.567916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.136 [2024-11-20 10:46:22.610969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.136 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.136 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:22.136 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:22.395 Nvme0n1 00:29:22.395 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:22.653 [ 00:29:22.653 { 00:29:22.653 "name": "Nvme0n1", 00:29:22.653 "aliases": [ 00:29:22.653 "512dba11-9065-4245-9aae-c7dbf91dbbe2" 00:29:22.653 ], 00:29:22.653 "product_name": "NVMe disk", 00:29:22.653 
"block_size": 4096, 00:29:22.653 "num_blocks": 38912, 00:29:22.653 "uuid": "512dba11-9065-4245-9aae-c7dbf91dbbe2", 00:29:22.653 "numa_id": 1, 00:29:22.653 "assigned_rate_limits": { 00:29:22.653 "rw_ios_per_sec": 0, 00:29:22.653 "rw_mbytes_per_sec": 0, 00:29:22.653 "r_mbytes_per_sec": 0, 00:29:22.653 "w_mbytes_per_sec": 0 00:29:22.653 }, 00:29:22.653 "claimed": false, 00:29:22.653 "zoned": false, 00:29:22.654 "supported_io_types": { 00:29:22.654 "read": true, 00:29:22.654 "write": true, 00:29:22.654 "unmap": true, 00:29:22.654 "flush": true, 00:29:22.654 "reset": true, 00:29:22.654 "nvme_admin": true, 00:29:22.654 "nvme_io": true, 00:29:22.654 "nvme_io_md": false, 00:29:22.654 "write_zeroes": true, 00:29:22.654 "zcopy": false, 00:29:22.654 "get_zone_info": false, 00:29:22.654 "zone_management": false, 00:29:22.654 "zone_append": false, 00:29:22.654 "compare": true, 00:29:22.654 "compare_and_write": true, 00:29:22.654 "abort": true, 00:29:22.654 "seek_hole": false, 00:29:22.654 "seek_data": false, 00:29:22.654 "copy": true, 00:29:22.654 "nvme_iov_md": false 00:29:22.654 }, 00:29:22.654 "memory_domains": [ 00:29:22.654 { 00:29:22.654 "dma_device_id": "system", 00:29:22.654 "dma_device_type": 1 00:29:22.654 } 00:29:22.654 ], 00:29:22.654 "driver_specific": { 00:29:22.654 "nvme": [ 00:29:22.654 { 00:29:22.654 "trid": { 00:29:22.654 "trtype": "TCP", 00:29:22.654 "adrfam": "IPv4", 00:29:22.654 "traddr": "10.0.0.2", 00:29:22.654 "trsvcid": "4420", 00:29:22.654 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:22.654 }, 00:29:22.654 "ctrlr_data": { 00:29:22.654 "cntlid": 1, 00:29:22.654 "vendor_id": "0x8086", 00:29:22.654 "model_number": "SPDK bdev Controller", 00:29:22.654 "serial_number": "SPDK0", 00:29:22.654 "firmware_revision": "25.01", 00:29:22.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.654 "oacs": { 00:29:22.654 "security": 0, 00:29:22.654 "format": 0, 00:29:22.654 "firmware": 0, 00:29:22.654 "ns_manage": 0 00:29:22.654 }, 00:29:22.654 "multi_ctrlr": true, 
00:29:22.654 "ana_reporting": false 00:29:22.654 }, 00:29:22.654 "vs": { 00:29:22.654 "nvme_version": "1.3" 00:29:22.654 }, 00:29:22.654 "ns_data": { 00:29:22.654 "id": 1, 00:29:22.654 "can_share": true 00:29:22.654 } 00:29:22.654 } 00:29:22.654 ], 00:29:22.654 "mp_policy": "active_passive" 00:29:22.654 } 00:29:22.654 } 00:29:22.654 ] 00:29:22.654 10:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3680761 00:29:22.654 10:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:22.654 10:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:22.654 Running I/O for 10 seconds... 00:29:23.587 Latency(us) 00:29:23.587 [2024-11-20T09:46:24.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.587 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:23.587 [2024-11-20T09:46:24.318Z] =================================================================================================================== 00:29:23.587 [2024-11-20T09:46:24.318Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:23.587 00:29:24.520 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:24.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.520 Nvme0n1 : 2.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:24.520 [2024-11-20T09:46:25.251Z] 
=================================================================================================================== 00:29:24.520 [2024-11-20T09:46:25.251Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:24.520 00:29:24.777 true 00:29:24.777 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:24.777 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:25.035 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:25.035 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:25.035 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3680761 00:29:25.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.600 Nvme0n1 : 3.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:25.600 [2024-11-20T09:46:26.331Z] =================================================================================================================== 00:29:25.600 [2024-11-20T09:46:26.331Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:25.600 00:29:26.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.534 Nvme0n1 : 4.00 22574.25 88.18 0.00 0.00 0.00 0.00 0.00 00:29:26.534 [2024-11-20T09:46:27.265Z] =================================================================================================================== 00:29:26.534 [2024-11-20T09:46:27.265Z] Total : 22574.25 88.18 0.00 0.00 0.00 0.00 0.00 00:29:26.534 00:29:27.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:27.908 Nvme0n1 : 5.00 22609.40 88.32 0.00 0.00 0.00 0.00 0.00 00:29:27.908 [2024-11-20T09:46:28.639Z] =================================================================================================================== 00:29:27.908 [2024-11-20T09:46:28.639Z] Total : 22609.40 88.32 0.00 0.00 0.00 0.00 0.00 00:29:27.908 00:29:28.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.841 Nvme0n1 : 6.00 22661.83 88.52 0.00 0.00 0.00 0.00 0.00 00:29:28.841 [2024-11-20T09:46:29.572Z] =================================================================================================================== 00:29:28.841 [2024-11-20T09:46:29.572Z] Total : 22661.83 88.52 0.00 0.00 0.00 0.00 0.00 00:29:28.841 00:29:29.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.774 Nvme0n1 : 7.00 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:29:29.774 [2024-11-20T09:46:30.505Z] =================================================================================================================== 00:29:29.774 [2024-11-20T09:46:30.505Z] Total : 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:29:29.774 00:29:30.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.708 Nvme0n1 : 8.00 22695.50 88.65 0.00 0.00 0.00 0.00 0.00 00:29:30.708 [2024-11-20T09:46:31.439Z] =================================================================================================================== 00:29:30.708 [2024-11-20T09:46:31.439Z] Total : 22695.50 88.65 0.00 0.00 0.00 0.00 0.00 00:29:30.708 00:29:31.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.641 Nvme0n1 : 9.00 22727.89 88.78 0.00 0.00 0.00 0.00 0.00 00:29:31.641 [2024-11-20T09:46:32.372Z] =================================================================================================================== 00:29:31.641 [2024-11-20T09:46:32.372Z] Total : 22727.89 88.78 0.00 0.00 0.00 0.00 0.00 00:29:31.641 
00:29:32.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.572 Nvme0n1 : 10.00 22741.10 88.83 0.00 0.00 0.00 0.00 0.00 00:29:32.572 [2024-11-20T09:46:33.303Z] =================================================================================================================== 00:29:32.572 [2024-11-20T09:46:33.303Z] Total : 22741.10 88.83 0.00 0.00 0.00 0.00 0.00 00:29:32.572 00:29:32.572 00:29:32.572 Latency(us) 00:29:32.572 [2024-11-20T09:46:33.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.572 Nvme0n1 : 10.00 22747.74 88.86 0.00 0.00 5623.88 3291.05 25986.45 00:29:32.572 [2024-11-20T09:46:33.303Z] =================================================================================================================== 00:29:32.572 [2024-11-20T09:46:33.303Z] Total : 22747.74 88.86 0.00 0.00 5623.88 3291.05 25986.45 00:29:32.572 { 00:29:32.572 "results": [ 00:29:32.572 { 00:29:32.572 "job": "Nvme0n1", 00:29:32.572 "core_mask": "0x2", 00:29:32.572 "workload": "randwrite", 00:29:32.572 "status": "finished", 00:29:32.572 "queue_depth": 128, 00:29:32.572 "io_size": 4096, 00:29:32.572 "runtime": 10.002709, 00:29:32.572 "iops": 22747.7376378739, 00:29:32.572 "mibps": 88.85835014794492, 00:29:32.572 "io_failed": 0, 00:29:32.572 "io_timeout": 0, 00:29:32.572 "avg_latency_us": 5623.876028514557, 00:29:32.572 "min_latency_us": 3291.046956521739, 00:29:32.572 "max_latency_us": 25986.448695652172 00:29:32.572 } 00:29:32.572 ], 00:29:32.572 "core_count": 1 00:29:32.572 } 00:29:32.572 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3680628 00:29:32.572 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3680628 ']' 00:29:32.572 10:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3680628 00:29:32.572 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:32.572 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.572 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3680628 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3680628' 00:29:32.830 killing process with pid 3680628 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3680628 00:29:32.830 Received shutdown signal, test time was about 10.000000 seconds 00:29:32.830 00:29:32.830 Latency(us) 00:29:32.830 [2024-11-20T09:46:33.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.830 [2024-11-20T09:46:33.561Z] =================================================================================================================== 00:29:32.830 [2024-11-20T09:46:33.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3680628 00:29:32.830 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.088 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:33.346 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:33.346 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:33.604 [2024-11-20 10:46:34.270709] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.604 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.605 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.605 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:33.605 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:33.862 request: 00:29:33.862 { 00:29:33.862 "uuid": "106d0970-9bd3-49dd-9d61-f44d5af211ab", 00:29:33.862 "method": 
"bdev_lvol_get_lvstores", 00:29:33.862 "req_id": 1 00:29:33.862 } 00:29:33.862 Got JSON-RPC error response 00:29:33.862 response: 00:29:33.862 { 00:29:33.862 "code": -19, 00:29:33.862 "message": "No such device" 00:29:33.862 } 00:29:33.862 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:33.862 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:33.862 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:33.862 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:33.862 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:34.120 aio_bdev 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 512dba11-9065-4245-9aae-c7dbf91dbbe2 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=512dba11-9065-4245-9aae-c7dbf91dbbe2 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:34.120 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:34.379 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 512dba11-9065-4245-9aae-c7dbf91dbbe2 -t 2000 00:29:34.379 [ 00:29:34.379 { 00:29:34.379 "name": "512dba11-9065-4245-9aae-c7dbf91dbbe2", 00:29:34.379 "aliases": [ 00:29:34.379 "lvs/lvol" 00:29:34.379 ], 00:29:34.379 "product_name": "Logical Volume", 00:29:34.379 "block_size": 4096, 00:29:34.379 "num_blocks": 38912, 00:29:34.379 "uuid": "512dba11-9065-4245-9aae-c7dbf91dbbe2", 00:29:34.379 "assigned_rate_limits": { 00:29:34.379 "rw_ios_per_sec": 0, 00:29:34.379 "rw_mbytes_per_sec": 0, 00:29:34.379 "r_mbytes_per_sec": 0, 00:29:34.379 "w_mbytes_per_sec": 0 00:29:34.379 }, 00:29:34.379 "claimed": false, 00:29:34.379 "zoned": false, 00:29:34.379 "supported_io_types": { 00:29:34.379 "read": true, 00:29:34.379 "write": true, 00:29:34.379 "unmap": true, 00:29:34.379 "flush": false, 00:29:34.379 "reset": true, 00:29:34.379 "nvme_admin": false, 00:29:34.379 "nvme_io": false, 00:29:34.379 "nvme_io_md": false, 00:29:34.379 "write_zeroes": true, 00:29:34.379 "zcopy": false, 00:29:34.379 "get_zone_info": false, 00:29:34.379 "zone_management": false, 00:29:34.379 "zone_append": false, 00:29:34.379 "compare": false, 00:29:34.379 "compare_and_write": false, 00:29:34.379 "abort": false, 00:29:34.379 "seek_hole": true, 00:29:34.379 "seek_data": true, 00:29:34.379 "copy": false, 00:29:34.379 "nvme_iov_md": false 00:29:34.379 }, 00:29:34.379 "driver_specific": { 00:29:34.379 "lvol": { 00:29:34.379 "lvol_store_uuid": "106d0970-9bd3-49dd-9d61-f44d5af211ab", 00:29:34.379 "base_bdev": "aio_bdev", 00:29:34.379 
"thin_provision": false, 00:29:34.379 "num_allocated_clusters": 38, 00:29:34.379 "snapshot": false, 00:29:34.379 "clone": false, 00:29:34.379 "esnap_clone": false 00:29:34.379 } 00:29:34.379 } 00:29:34.379 } 00:29:34.379 ] 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 00:29:34.638 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:34.896 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:34.896 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 512dba11-9065-4245-9aae-c7dbf91dbbe2 00:29:35.155 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 106d0970-9bd3-49dd-9d61-f44d5af211ab 
00:29:35.414 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:35.414 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:35.673 00:29:35.673 real 0m15.742s 00:29:35.673 user 0m15.184s 00:29:35.673 sys 0m1.556s 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.673 ************************************ 00:29:35.673 END TEST lvs_grow_clean 00:29:35.673 ************************************ 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:35.673 ************************************ 00:29:35.673 START TEST lvs_grow_dirty 00:29:35.673 ************************************ 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:35.673 10:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:35.673 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:35.931 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:35.931 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:35.931 10:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb311e27-85cf-4e77-b161-cced995cbea0 00:29:36.190 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:36.190 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:36.190 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:36.190 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:36.190 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eb311e27-85cf-4e77-b161-cced995cbea0 lvol 150 00:29:36.448 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:36.448 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:36.448 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:36.707 [2024-11-20 10:46:37.238643] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:36.707 [2024-11-20 
10:46:37.238772] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:36.707 true 00:29:36.707 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:36.707 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:36.966 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:36.966 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:36.966 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:37.235 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.499 [2024-11-20 10:46:38.031092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3683215 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3683215 /var/tmp/bdevperf.sock 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3683215 ']' 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.499 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.500 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:37.500 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.500 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.758 [2024-11-20 10:46:38.267043] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:29:37.758 [2024-11-20 10:46:38.267098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683215 ] 00:29:37.758 [2024-11-20 10:46:38.340865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.758 [2024-11-20 10:46:38.382952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.758 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.758 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:37.758 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:38.015 Nvme0n1 00:29:38.273 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:38.273 [ 00:29:38.273 { 00:29:38.273 "name": "Nvme0n1", 00:29:38.273 "aliases": [ 00:29:38.273 "c6874c76-ee95-4a33-8d08-a3208855f5a3" 00:29:38.273 ], 00:29:38.273 "product_name": "NVMe disk", 00:29:38.273 "block_size": 4096, 00:29:38.273 "num_blocks": 38912, 00:29:38.273 "uuid": "c6874c76-ee95-4a33-8d08-a3208855f5a3", 00:29:38.273 "numa_id": 1, 00:29:38.273 "assigned_rate_limits": { 00:29:38.273 "rw_ios_per_sec": 0, 00:29:38.273 "rw_mbytes_per_sec": 0, 00:29:38.273 "r_mbytes_per_sec": 0, 00:29:38.273 "w_mbytes_per_sec": 0 00:29:38.273 }, 00:29:38.273 "claimed": false, 00:29:38.273 "zoned": false, 
00:29:38.273 "supported_io_types": { 00:29:38.273 "read": true, 00:29:38.273 "write": true, 00:29:38.273 "unmap": true, 00:29:38.273 "flush": true, 00:29:38.273 "reset": true, 00:29:38.273 "nvme_admin": true, 00:29:38.273 "nvme_io": true, 00:29:38.273 "nvme_io_md": false, 00:29:38.273 "write_zeroes": true, 00:29:38.273 "zcopy": false, 00:29:38.273 "get_zone_info": false, 00:29:38.273 "zone_management": false, 00:29:38.273 "zone_append": false, 00:29:38.273 "compare": true, 00:29:38.273 "compare_and_write": true, 00:29:38.273 "abort": true, 00:29:38.273 "seek_hole": false, 00:29:38.273 "seek_data": false, 00:29:38.273 "copy": true, 00:29:38.273 "nvme_iov_md": false 00:29:38.273 }, 00:29:38.273 "memory_domains": [ 00:29:38.273 { 00:29:38.273 "dma_device_id": "system", 00:29:38.273 "dma_device_type": 1 00:29:38.273 } 00:29:38.273 ], 00:29:38.273 "driver_specific": { 00:29:38.273 "nvme": [ 00:29:38.273 { 00:29:38.273 "trid": { 00:29:38.273 "trtype": "TCP", 00:29:38.273 "adrfam": "IPv4", 00:29:38.273 "traddr": "10.0.0.2", 00:29:38.273 "trsvcid": "4420", 00:29:38.273 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:38.273 }, 00:29:38.273 "ctrlr_data": { 00:29:38.273 "cntlid": 1, 00:29:38.273 "vendor_id": "0x8086", 00:29:38.273 "model_number": "SPDK bdev Controller", 00:29:38.273 "serial_number": "SPDK0", 00:29:38.273 "firmware_revision": "25.01", 00:29:38.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.273 "oacs": { 00:29:38.273 "security": 0, 00:29:38.273 "format": 0, 00:29:38.273 "firmware": 0, 00:29:38.273 "ns_manage": 0 00:29:38.273 }, 00:29:38.273 "multi_ctrlr": true, 00:29:38.273 "ana_reporting": false 00:29:38.273 }, 00:29:38.273 "vs": { 00:29:38.273 "nvme_version": "1.3" 00:29:38.273 }, 00:29:38.273 "ns_data": { 00:29:38.273 "id": 1, 00:29:38.273 "can_share": true 00:29:38.273 } 00:29:38.273 } 00:29:38.273 ], 00:29:38.273 "mp_policy": "active_passive" 00:29:38.273 } 00:29:38.273 } 00:29:38.273 ] 00:29:38.273 10:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3683318 00:29:38.273 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:38.273 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.532 Running I/O for 10 seconds... 00:29:39.465 Latency(us) 00:29:39.465 [2024-11-20T09:46:40.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.465 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:39.465 [2024-11-20T09:46:40.197Z] =================================================================================================================== 00:29:39.466 [2024-11-20T09:46:40.197Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:39.466 00:29:40.399 10:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:40.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.399 Nvme0n1 : 2.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:40.399 [2024-11-20T09:46:41.130Z] =================================================================================================================== 00:29:40.399 [2024-11-20T09:46:41.130Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:40.399 00:29:40.399 true 00:29:40.657 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:40.657 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:40.657 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:40.657 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:40.657 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3683318 00:29:41.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.592 Nvme0n1 : 3.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:41.592 [2024-11-20T09:46:42.323Z] =================================================================================================================== 00:29:41.592 [2024-11-20T09:46:42.323Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:41.592 00:29:42.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.526 Nvme0n1 : 4.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:42.526 [2024-11-20T09:46:43.257Z] =================================================================================================================== 00:29:42.526 [2024-11-20T09:46:43.257Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:42.526 00:29:43.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.458 Nvme0n1 : 5.00 22555.20 88.11 0.00 0.00 0.00 0.00 0.00 00:29:43.458 [2024-11-20T09:46:44.189Z] =================================================================================================================== 00:29:43.458 [2024-11-20T09:46:44.189Z] Total : 22555.20 88.11 0.00 0.00 0.00 0.00 0.00 00:29:43.458 00:29:44.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:44.388 Nvme0n1 : 6.00 22627.17 88.39 0.00 0.00 0.00 0.00 0.00 00:29:44.388 [2024-11-20T09:46:45.119Z] =================================================================================================================== 00:29:44.388 [2024-11-20T09:46:45.119Z] Total : 22627.17 88.39 0.00 0.00 0.00 0.00 0.00 00:29:44.388 00:29:45.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.320 Nvme0n1 : 7.00 22660.43 88.52 0.00 0.00 0.00 0.00 0.00 00:29:45.320 [2024-11-20T09:46:46.051Z] =================================================================================================================== 00:29:45.320 [2024-11-20T09:46:46.051Z] Total : 22660.43 88.52 0.00 0.00 0.00 0.00 0.00 00:29:45.320 00:29:46.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.693 Nvme0n1 : 8.00 22701.25 88.68 0.00 0.00 0.00 0.00 0.00 00:29:46.693 [2024-11-20T09:46:47.424Z] =================================================================================================================== 00:29:46.693 [2024-11-20T09:46:47.424Z] Total : 22701.25 88.68 0.00 0.00 0.00 0.00 0.00 00:29:46.693 00:29:47.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.626 Nvme0n1 : 9.00 22718.89 88.75 0.00 0.00 0.00 0.00 0.00 00:29:47.626 [2024-11-20T09:46:48.357Z] =================================================================================================================== 00:29:47.626 [2024-11-20T09:46:48.357Z] Total : 22718.89 88.75 0.00 0.00 0.00 0.00 0.00 00:29:47.626 00:29:48.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.559 Nvme0n1 : 10.00 22745.70 88.85 0.00 0.00 0.00 0.00 0.00 00:29:48.559 [2024-11-20T09:46:49.290Z] =================================================================================================================== 00:29:48.559 [2024-11-20T09:46:49.290Z] Total : 22745.70 88.85 0.00 0.00 0.00 0.00 0.00 00:29:48.559 00:29:48.559 
00:29:48.559 Latency(us) 00:29:48.559 [2024-11-20T09:46:49.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.559 Nvme0n1 : 10.01 22745.37 88.85 0.00 0.00 5624.57 5014.93 26556.33 00:29:48.559 [2024-11-20T09:46:49.290Z] =================================================================================================================== 00:29:48.559 [2024-11-20T09:46:49.290Z] Total : 22745.37 88.85 0.00 0.00 5624.57 5014.93 26556.33 00:29:48.559 { 00:29:48.559 "results": [ 00:29:48.559 { 00:29:48.559 "job": "Nvme0n1", 00:29:48.559 "core_mask": "0x2", 00:29:48.559 "workload": "randwrite", 00:29:48.559 "status": "finished", 00:29:48.559 "queue_depth": 128, 00:29:48.559 "io_size": 4096, 00:29:48.559 "runtime": 10.005772, 00:29:48.559 "iops": 22745.371371644287, 00:29:48.559 "mibps": 88.8491069204855, 00:29:48.559 "io_failed": 0, 00:29:48.559 "io_timeout": 0, 00:29:48.559 "avg_latency_us": 5624.570666967239, 00:29:48.559 "min_latency_us": 5014.928695652174, 00:29:48.559 "max_latency_us": 26556.326956521738 00:29:48.559 } 00:29:48.559 ], 00:29:48.559 "core_count": 1 00:29:48.559 } 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3683215 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3683215 ']' 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3683215 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.559 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683215 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683215' 00:29:48.559 killing process with pid 3683215 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3683215 00:29:48.559 Received shutdown signal, test time was about 10.000000 seconds 00:29:48.559 00:29:48.559 Latency(us) 00:29:48.559 [2024-11-20T09:46:49.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.559 [2024-11-20T09:46:49.290Z] =================================================================================================================== 00:29:48.559 [2024-11-20T09:46:49.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3683215 00:29:48.559 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.817 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:49.075 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:49.075 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3680139 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3680139 00:29:49.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3680139 Killed "${NVMF_APP[@]}" "$@" 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3685060 00:29:49.334 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3685060 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3685060 ']' 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.334 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:49.334 [2024-11-20 10:46:49.999148] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:49.334 [2024-11-20 10:46:50.000090] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:29:49.334 [2024-11-20 10:46:50.000127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.592 [2024-11-20 10:46:50.070328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.592 [2024-11-20 10:46:50.112546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.592 [2024-11-20 10:46:50.112579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.592 [2024-11-20 10:46:50.112587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.592 [2024-11-20 10:46:50.112593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.592 [2024-11-20 10:46:50.112602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.592 [2024-11-20 10:46:50.113158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.592 [2024-11-20 10:46:50.180556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:49.592 [2024-11-20 10:46:50.180776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.592 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:49.851 [2024-11-20 10:46:50.426526] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:49.851 [2024-11-20 10:46:50.426730] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:49.851 [2024-11-20 10:46:50.426814] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:49.851 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:50.110 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6874c76-ee95-4a33-8d08-a3208855f5a3 -t 2000 00:29:50.110 [ 00:29:50.110 { 00:29:50.110 "name": "c6874c76-ee95-4a33-8d08-a3208855f5a3", 00:29:50.110 "aliases": [ 00:29:50.110 "lvs/lvol" 00:29:50.110 ], 00:29:50.110 "product_name": "Logical Volume", 00:29:50.110 "block_size": 4096, 00:29:50.110 "num_blocks": 38912, 00:29:50.110 "uuid": "c6874c76-ee95-4a33-8d08-a3208855f5a3", 00:29:50.110 "assigned_rate_limits": { 00:29:50.110 "rw_ios_per_sec": 0, 00:29:50.110 "rw_mbytes_per_sec": 0, 00:29:50.110 "r_mbytes_per_sec": 0, 00:29:50.110 "w_mbytes_per_sec": 0 00:29:50.110 }, 00:29:50.110 "claimed": false, 00:29:50.110 "zoned": false, 00:29:50.110 "supported_io_types": { 00:29:50.110 "read": true, 00:29:50.110 "write": true, 00:29:50.110 "unmap": true, 00:29:50.110 "flush": false, 00:29:50.110 "reset": true, 00:29:50.110 "nvme_admin": false, 00:29:50.110 "nvme_io": false, 00:29:50.110 "nvme_io_md": false, 00:29:50.110 "write_zeroes": true, 
00:29:50.110 "zcopy": false, 00:29:50.110 "get_zone_info": false, 00:29:50.110 "zone_management": false, 00:29:50.110 "zone_append": false, 00:29:50.110 "compare": false, 00:29:50.110 "compare_and_write": false, 00:29:50.110 "abort": false, 00:29:50.110 "seek_hole": true, 00:29:50.110 "seek_data": true, 00:29:50.110 "copy": false, 00:29:50.110 "nvme_iov_md": false 00:29:50.110 }, 00:29:50.110 "driver_specific": { 00:29:50.110 "lvol": { 00:29:50.110 "lvol_store_uuid": "eb311e27-85cf-4e77-b161-cced995cbea0", 00:29:50.110 "base_bdev": "aio_bdev", 00:29:50.110 "thin_provision": false, 00:29:50.110 "num_allocated_clusters": 38, 00:29:50.111 "snapshot": false, 00:29:50.111 "clone": false, 00:29:50.111 "esnap_clone": false 00:29:50.111 } 00:29:50.111 } 00:29:50.111 } 00:29:50.111 ] 00:29:50.369 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:50.369 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:50.369 10:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:50.369 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:50.369 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:50.369 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:50.628 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:50.628 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:50.887 [2024-11-20 10:46:51.425624] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:50.887 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:51.146 request: 00:29:51.146 { 00:29:51.146 "uuid": "eb311e27-85cf-4e77-b161-cced995cbea0", 00:29:51.146 "method": "bdev_lvol_get_lvstores", 00:29:51.146 "req_id": 1 00:29:51.146 } 00:29:51.146 Got JSON-RPC error response 00:29:51.146 response: 00:29:51.146 { 00:29:51.146 "code": -19, 00:29:51.146 "message": "No such device" 00:29:51.146 } 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:51.146 aio_bdev 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:51.146 10:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:51.404 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6874c76-ee95-4a33-8d08-a3208855f5a3 -t 2000 00:29:51.663 [ 00:29:51.663 { 00:29:51.663 "name": "c6874c76-ee95-4a33-8d08-a3208855f5a3", 00:29:51.663 "aliases": [ 00:29:51.663 "lvs/lvol" 00:29:51.663 ], 00:29:51.663 "product_name": "Logical Volume", 00:29:51.663 "block_size": 4096, 00:29:51.663 "num_blocks": 38912, 00:29:51.663 "uuid": "c6874c76-ee95-4a33-8d08-a3208855f5a3", 00:29:51.663 "assigned_rate_limits": { 00:29:51.663 "rw_ios_per_sec": 0, 00:29:51.663 "rw_mbytes_per_sec": 0, 00:29:51.663 
"r_mbytes_per_sec": 0, 00:29:51.663 "w_mbytes_per_sec": 0 00:29:51.663 }, 00:29:51.663 "claimed": false, 00:29:51.663 "zoned": false, 00:29:51.663 "supported_io_types": { 00:29:51.663 "read": true, 00:29:51.663 "write": true, 00:29:51.663 "unmap": true, 00:29:51.663 "flush": false, 00:29:51.663 "reset": true, 00:29:51.663 "nvme_admin": false, 00:29:51.663 "nvme_io": false, 00:29:51.663 "nvme_io_md": false, 00:29:51.663 "write_zeroes": true, 00:29:51.663 "zcopy": false, 00:29:51.663 "get_zone_info": false, 00:29:51.663 "zone_management": false, 00:29:51.663 "zone_append": false, 00:29:51.663 "compare": false, 00:29:51.663 "compare_and_write": false, 00:29:51.663 "abort": false, 00:29:51.663 "seek_hole": true, 00:29:51.663 "seek_data": true, 00:29:51.663 "copy": false, 00:29:51.663 "nvme_iov_md": false 00:29:51.663 }, 00:29:51.663 "driver_specific": { 00:29:51.663 "lvol": { 00:29:51.663 "lvol_store_uuid": "eb311e27-85cf-4e77-b161-cced995cbea0", 00:29:51.663 "base_bdev": "aio_bdev", 00:29:51.663 "thin_provision": false, 00:29:51.663 "num_allocated_clusters": 38, 00:29:51.663 "snapshot": false, 00:29:51.663 "clone": false, 00:29:51.663 "esnap_clone": false 00:29:51.663 } 00:29:51.663 } 00:29:51.663 } 00:29:51.663 ] 00:29:51.663 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:51.663 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:51.663 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:51.922 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:51.922 10:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:51.922 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:51.922 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:51.922 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6874c76-ee95-4a33-8d08-a3208855f5a3 00:29:52.181 10:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb311e27-85cf-4e77-b161-cced995cbea0 00:29:52.439 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:52.697 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.697 00:29:52.697 real 0m17.011s 00:29:52.697 user 0m34.376s 00:29:52.697 sys 0m3.908s 00:29:52.697 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.697 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:52.698 ************************************ 00:29:52.698 END TEST lvs_grow_dirty 00:29:52.698 ************************************ 
00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:52.698 nvmf_trace.0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.698 10:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.698 rmmod nvme_tcp 00:29:52.698 rmmod nvme_fabrics 00:29:52.698 rmmod nvme_keyring 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3685060 ']' 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3685060 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3685060 ']' 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3685060 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.698 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3685060 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:52.957 
10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3685060' 00:29:52.957 killing process with pid 3685060 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3685060 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3685060 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.957 10:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.494 
10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.494 00:29:55.494 real 0m41.937s 00:29:55.494 user 0m52.086s 00:29:55.494 sys 0m10.349s 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:55.494 ************************************ 00:29:55.494 END TEST nvmf_lvs_grow 00:29:55.494 ************************************ 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:55.494 ************************************ 00:29:55.494 START TEST nvmf_bdev_io_wait 00:29:55.494 ************************************ 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:55.494 * Looking for test storage... 
00:29:55.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.494 --rc genhtml_branch_coverage=1 00:29:55.494 --rc genhtml_function_coverage=1 00:29:55.494 --rc genhtml_legend=1 00:29:55.494 --rc geninfo_all_blocks=1 00:29:55.494 --rc geninfo_unexecuted_blocks=1 00:29:55.494 00:29:55.494 ' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.494 --rc genhtml_branch_coverage=1 00:29:55.494 --rc genhtml_function_coverage=1 00:29:55.494 --rc genhtml_legend=1 00:29:55.494 --rc geninfo_all_blocks=1 00:29:55.494 --rc geninfo_unexecuted_blocks=1 00:29:55.494 00:29:55.494 ' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.494 --rc genhtml_branch_coverage=1 00:29:55.494 --rc genhtml_function_coverage=1 00:29:55.494 --rc genhtml_legend=1 00:29:55.494 --rc geninfo_all_blocks=1 00:29:55.494 --rc geninfo_unexecuted_blocks=1 00:29:55.494 00:29:55.494 ' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.494 --rc genhtml_branch_coverage=1 00:29:55.494 --rc genhtml_function_coverage=1 
00:29:55.494 --rc genhtml_legend=1 00:29:55.494 --rc geninfo_all_blocks=1 00:29:55.494 --rc geninfo_unexecuted_blocks=1 00:29:55.494 00:29:55.494 ' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:55.494 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.494 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.495 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.495 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.495 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.495 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.495 10:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.495 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:00.922 10:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:00.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:00.922 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.922 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:00.923 Found net devices under 0000:86:00.0: cvl_0_0 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:00.923 Found net devices under 0000:86:00.1: cvl_0_1 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.923 10:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.923 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.182 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.182 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.182 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.182 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.182 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:01.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:30:01.183 00:30:01.183 --- 10.0.0.2 ping statistics --- 00:30:01.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.183 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:30:01.183 00:30:01.183 --- 10.0.0.1 ping statistics --- 00:30:01.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.183 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.183 10:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3689122 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3689122 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3689122 ']' 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.183 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.442 [2024-11-20 10:47:01.944227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:01.442 [2024-11-20 10:47:01.945185] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:01.442 [2024-11-20 10:47:01.945218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.442 [2024-11-20 10:47:02.025944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.442 [2024-11-20 10:47:02.069489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.442 [2024-11-20 10:47:02.069528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.442 [2024-11-20 10:47:02.069535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.442 [2024-11-20 10:47:02.069541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.442 [2024-11-20 10:47:02.069546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.442 [2024-11-20 10:47:02.071136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.442 [2024-11-20 10:47:02.071266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.443 [2024-11-20 10:47:02.071376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.443 [2024-11-20 10:47:02.071376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.443 [2024-11-20 10:47:02.071636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.443 10:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.443 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 [2024-11-20 10:47:02.196486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:01.702 [2024-11-20 10:47:02.197254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:01.702 [2024-11-20 10:47:02.197481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:01.702 [2024-11-20 10:47:02.197595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 [2024-11-20 10:47:02.207989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 Malloc0 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.702 [2024-11-20 10:47:02.276099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3689299 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3689302 00:30:01.702 10:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.702 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.702 { 00:30:01.702 "params": { 00:30:01.702 "name": "Nvme$subsystem", 00:30:01.702 "trtype": "$TEST_TRANSPORT", 00:30:01.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.702 "adrfam": "ipv4", 00:30:01.702 "trsvcid": "$NVMF_PORT", 00:30:01.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.703 "hdgst": ${hdgst:-false}, 00:30:01.703 "ddgst": ${ddgst:-false} 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 } 00:30:01.703 EOF 00:30:01.703 )") 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3689304 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.703 10:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.703 { 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme$subsystem", 00:30:01.703 "trtype": "$TEST_TRANSPORT", 00:30:01.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "$NVMF_PORT", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.703 "hdgst": ${hdgst:-false}, 00:30:01.703 "ddgst": ${ddgst:-false} 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 } 00:30:01.703 EOF 00:30:01.703 )") 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3689308 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.703 { 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme$subsystem", 00:30:01.703 "trtype": "$TEST_TRANSPORT", 00:30:01.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "$NVMF_PORT", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.703 "hdgst": ${hdgst:-false}, 00:30:01.703 "ddgst": ${ddgst:-false} 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 } 00:30:01.703 EOF 00:30:01.703 )") 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.703 { 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme$subsystem", 00:30:01.703 "trtype": "$TEST_TRANSPORT", 00:30:01.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "$NVMF_PORT", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.703 "hdgst": ${hdgst:-false}, 00:30:01.703 "ddgst": ${ddgst:-false} 00:30:01.703 }, 00:30:01.703 "method": 
"bdev_nvme_attach_controller" 00:30:01.703 } 00:30:01.703 EOF 00:30:01.703 )") 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3689299 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme1", 00:30:01.703 "trtype": "tcp", 00:30:01.703 "traddr": "10.0.0.2", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "4420", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.703 "hdgst": false, 00:30:01.703 "ddgst": false 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 }' 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme1", 00:30:01.703 "trtype": "tcp", 00:30:01.703 "traddr": "10.0.0.2", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "4420", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.703 "hdgst": false, 00:30:01.703 "ddgst": false 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 }' 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme1", 00:30:01.703 "trtype": "tcp", 00:30:01.703 "traddr": "10.0.0.2", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "4420", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.703 "hdgst": false, 00:30:01.703 "ddgst": false 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 00:30:01.703 }' 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:01.703 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.703 "params": { 00:30:01.703 "name": "Nvme1", 00:30:01.703 "trtype": "tcp", 00:30:01.703 "traddr": "10.0.0.2", 00:30:01.703 "adrfam": "ipv4", 00:30:01.703 "trsvcid": "4420", 00:30:01.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.703 "hdgst": false, 00:30:01.703 "ddgst": false 00:30:01.703 }, 00:30:01.703 "method": "bdev_nvme_attach_controller" 
00:30:01.703 }' 00:30:01.703 [2024-11-20 10:47:02.327724] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:01.703 [2024-11-20 10:47:02.327771] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:01.703 [2024-11-20 10:47:02.329326] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:01.703 [2024-11-20 10:47:02.329378] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:01.703 [2024-11-20 10:47:02.329799] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:01.703 [2024-11-20 10:47:02.329844] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:01.703 [2024-11-20 10:47:02.330444] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:30:01.703 [2024-11-20 10:47:02.330487] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:01.961 [2024-11-20 10:47:02.491693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.961 [2024-11-20 10:47:02.526801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:01.961 [2024-11-20 10:47:02.588906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.961 [2024-11-20 10:47:02.631930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.961 [2024-11-20 10:47:02.684723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.219 [2024-11-20 10:47:02.741260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.219 [2024-11-20 10:47:02.744611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:02.219 [2024-11-20 10:47:02.784245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:02.219 Running I/O for 1 seconds... 00:30:02.219 Running I/O for 1 seconds... 00:30:02.477 Running I/O for 1 seconds... 00:30:02.477 Running I/O for 1 seconds... 
00:30:03.409 8774.00 IOPS, 34.27 MiB/s 00:30:03.409 Latency(us) 00:30:03.409 [2024-11-20T09:47:04.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.409 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:03.409 Nvme1n1 : 1.02 8773.55 34.27 0.00 0.00 14504.37 3490.50 20059.71 00:30:03.409 [2024-11-20T09:47:04.140Z] =================================================================================================================== 00:30:03.409 [2024-11-20T09:47:04.140Z] Total : 8773.55 34.27 0.00 0.00 14504.37 3490.50 20059.71 00:30:03.409 11606.00 IOPS, 45.34 MiB/s 00:30:03.409 Latency(us) 00:30:03.409 [2024-11-20T09:47:04.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.409 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:03.409 Nvme1n1 : 1.01 11647.54 45.50 0.00 0.00 10946.84 4473.54 14816.83 00:30:03.409 [2024-11-20T09:47:04.140Z] =================================================================================================================== 00:30:03.409 [2024-11-20T09:47:04.140Z] Total : 11647.54 45.50 0.00 0.00 10946.84 4473.54 14816.83 00:30:03.409 8254.00 IOPS, 32.24 MiB/s 00:30:03.409 Latency(us) 00:30:03.409 [2024-11-20T09:47:04.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.409 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:03.409 Nvme1n1 : 1.01 8383.35 32.75 0.00 0.00 15236.75 3205.57 31001.38 00:30:03.409 [2024-11-20T09:47:04.140Z] =================================================================================================================== 00:30:03.409 [2024-11-20T09:47:04.140Z] Total : 8383.35 32.75 0.00 0.00 15236.75 3205.57 31001.38 00:30:03.409 237144.00 IOPS, 926.34 MiB/s 00:30:03.409 Latency(us) 00:30:03.409 [2024-11-20T09:47:04.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.409 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:30:03.409 Nvme1n1 : 1.00 236773.40 924.90 0.00 0.00 538.20 233.29 1538.67 00:30:03.409 [2024-11-20T09:47:04.140Z] =================================================================================================================== 00:30:03.409 [2024-11-20T09:47:04.140Z] Total : 236773.40 924.90 0.00 0.00 538.20 233.29 1538.67 00:30:03.409 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3689302 00:30:03.409 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3689304 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3689308 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.667 10:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.667 rmmod nvme_tcp 00:30:03.667 rmmod nvme_fabrics 00:30:03.667 rmmod nvme_keyring 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3689122 ']' 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3689122 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3689122 ']' 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3689122 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3689122 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3689122' 00:30:03.667 killing process with pid 3689122 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3689122 00:30:03.667 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3689122 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.926 10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.926 
10:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.830 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.830 00:30:05.830 real 0m10.700s 00:30:05.830 user 0m15.036s 00:30:05.830 sys 0m6.406s 00:30:05.830 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.830 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.830 ************************************ 00:30:05.830 END TEST nvmf_bdev_io_wait 00:30:05.830 ************************************ 00:30:05.831 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:05.831 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:05.831 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.831 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.831 ************************************ 00:30:05.831 START TEST nvmf_queue_depth 00:30:05.831 ************************************ 00:30:05.831 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:06.090 * Looking for test storage... 
00:30:06.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:06.090 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.091 --rc genhtml_branch_coverage=1 00:30:06.091 --rc genhtml_function_coverage=1 00:30:06.091 --rc genhtml_legend=1 00:30:06.091 --rc geninfo_all_blocks=1 00:30:06.091 --rc geninfo_unexecuted_blocks=1 00:30:06.091 00:30:06.091 ' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.091 --rc genhtml_branch_coverage=1 00:30:06.091 --rc genhtml_function_coverage=1 00:30:06.091 --rc genhtml_legend=1 00:30:06.091 --rc geninfo_all_blocks=1 00:30:06.091 --rc geninfo_unexecuted_blocks=1 00:30:06.091 00:30:06.091 ' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.091 --rc genhtml_branch_coverage=1 00:30:06.091 --rc genhtml_function_coverage=1 00:30:06.091 --rc genhtml_legend=1 00:30:06.091 --rc geninfo_all_blocks=1 00:30:06.091 --rc geninfo_unexecuted_blocks=1 00:30:06.091 00:30:06.091 ' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.091 --rc genhtml_branch_coverage=1 00:30:06.091 --rc genhtml_function_coverage=1 00:30:06.091 --rc genhtml_legend=1 00:30:06.091 --rc 
geninfo_all_blocks=1 00:30:06.091 --rc geninfo_unexecuted_blocks=1 00:30:06.091 00:30:06.091 ' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.091 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.091 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.091 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.092 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.092 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.092 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.678 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.678 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.678 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.678 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.678 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.679 
10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:12.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.679 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:12.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:12.679 Found net devices under 0000:86:00.0: cvl_0_0 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:12.679 Found net devices under 0000:86:00.1: cvl_0_1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.679 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.679 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:12.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:30:12.679 00:30:12.679 --- 10.0.0.2 ping statistics --- 00:30:12.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.680 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:12.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:30:12.680 00:30:12.680 --- 10.0.0.1 ping statistics --- 00:30:12.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.680 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.680 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3693140 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3693140 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3693140 ']' 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 [2024-11-20 10:47:12.737545] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.680 [2024-11-20 10:47:12.738467] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:12.680 [2024-11-20 10:47:12.738499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.680 [2024-11-20 10:47:12.819051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.680 [2024-11-20 10:47:12.860379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.680 [2024-11-20 10:47:12.860414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.680 [2024-11-20 10:47:12.860421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.680 [2024-11-20 10:47:12.860427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.680 [2024-11-20 10:47:12.860433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.680 [2024-11-20 10:47:12.860983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.680 [2024-11-20 10:47:12.926787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.680 [2024-11-20 10:47:12.927002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 [2024-11-20 10:47:12.997641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 Malloc0 00:30:12.680 10:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 [2024-11-20 10:47:13.073754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.680 
10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3693168 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3693168 /var/tmp/bdevperf.sock 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3693168 ']' 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:12.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.680 [2024-11-20 10:47:13.123581] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:30:12.680 [2024-11-20 10:47:13.123623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693168 ] 00:30:12.680 [2024-11-20 10:47:13.198150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.680 [2024-11-20 10:47:13.240468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.680 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.938 NVMe0n1 00:30:12.938 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.938 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.938 Running I/O for 10 seconds... 
00:30:14.803 11666.00 IOPS, 45.57 MiB/s [2024-11-20T09:47:16.905Z] 11799.00 IOPS, 46.09 MiB/s [2024-11-20T09:47:17.838Z] 11954.00 IOPS, 46.70 MiB/s [2024-11-20T09:47:18.770Z] 12034.00 IOPS, 47.01 MiB/s [2024-11-20T09:47:19.701Z] 12088.40 IOPS, 47.22 MiB/s [2024-11-20T09:47:20.663Z] 12112.17 IOPS, 47.31 MiB/s [2024-11-20T09:47:21.597Z] 12135.14 IOPS, 47.40 MiB/s [2024-11-20T09:47:22.969Z] 12151.75 IOPS, 47.47 MiB/s [2024-11-20T09:47:23.901Z] 12171.78 IOPS, 47.55 MiB/s [2024-11-20T09:47:23.901Z] 12184.00 IOPS, 47.59 MiB/s 00:30:23.170 Latency(us) 00:30:23.170 [2024-11-20T09:47:23.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.170 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:23.171 Verification LBA range: start 0x0 length 0x4000 00:30:23.171 NVMe0n1 : 10.06 12214.14 47.71 0.00 0.00 83573.95 19831.76 52656.75 00:30:23.171 [2024-11-20T09:47:23.902Z] =================================================================================================================== 00:30:23.171 [2024-11-20T09:47:23.902Z] Total : 12214.14 47.71 0.00 0.00 83573.95 19831.76 52656.75 00:30:23.171 { 00:30:23.171 "results": [ 00:30:23.171 { 00:30:23.171 "job": "NVMe0n1", 00:30:23.171 "core_mask": "0x1", 00:30:23.171 "workload": "verify", 00:30:23.171 "status": "finished", 00:30:23.171 "verify_range": { 00:30:23.171 "start": 0, 00:30:23.171 "length": 16384 00:30:23.171 }, 00:30:23.171 "queue_depth": 1024, 00:30:23.171 "io_size": 4096, 00:30:23.171 "runtime": 10.059157, 00:30:23.171 "iops": 12214.144783703048, 00:30:23.171 "mibps": 47.71150306134003, 00:30:23.171 "io_failed": 0, 00:30:23.171 "io_timeout": 0, 00:30:23.171 "avg_latency_us": 83573.94567635051, 00:30:23.171 "min_latency_us": 19831.76347826087, 00:30:23.171 "max_latency_us": 52656.751304347825 00:30:23.171 } 00:30:23.171 ], 00:30:23.171 "core_count": 1 00:30:23.171 } 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3693168 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3693168 ']' 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3693168 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3693168 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3693168' 00:30:23.171 killing process with pid 3693168 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3693168 00:30:23.171 Received shutdown signal, test time was about 10.000000 seconds 00:30:23.171 00:30:23.171 Latency(us) 00:30:23.171 [2024-11-20T09:47:23.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.171 [2024-11-20T09:47:23.902Z] =================================================================================================================== 00:30:23.171 [2024-11-20T09:47:23.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3693168 00:30:23.171 10:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.171 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.171 rmmod nvme_tcp 00:30:23.171 rmmod nvme_fabrics 00:30:23.171 rmmod nvme_keyring 00:30:23.429 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.429 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:23.429 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3693140 ']' 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3693140 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3693140 ']' 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3693140 00:30:23.430 10:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3693140 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3693140' 00:30:23.430 killing process with pid 3693140 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3693140 00:30:23.430 10:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3693140 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.430 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.963 00:30:25.963 real 0m19.669s 00:30:25.963 user 0m22.688s 00:30:25.963 sys 0m6.262s 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:25.963 ************************************ 00:30:25.963 END TEST nvmf_queue_depth 00:30:25.963 ************************************ 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:25.963 ************************************ 00:30:25.963 START 
TEST nvmf_target_multipath 00:30:25.963 ************************************ 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:25.963 * Looking for test storage... 00:30:25.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.963 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.964 10:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.964 --rc genhtml_branch_coverage=1 00:30:25.964 --rc genhtml_function_coverage=1 00:30:25.964 --rc genhtml_legend=1 00:30:25.964 --rc geninfo_all_blocks=1 00:30:25.964 --rc geninfo_unexecuted_blocks=1 00:30:25.964 00:30:25.964 ' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.964 --rc genhtml_branch_coverage=1 00:30:25.964 --rc genhtml_function_coverage=1 00:30:25.964 --rc genhtml_legend=1 00:30:25.964 --rc geninfo_all_blocks=1 00:30:25.964 --rc geninfo_unexecuted_blocks=1 00:30:25.964 00:30:25.964 ' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.964 --rc genhtml_branch_coverage=1 00:30:25.964 --rc genhtml_function_coverage=1 00:30:25.964 --rc genhtml_legend=1 00:30:25.964 --rc geninfo_all_blocks=1 00:30:25.964 --rc geninfo_unexecuted_blocks=1 00:30:25.964 00:30:25.964 ' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.964 --rc genhtml_branch_coverage=1 00:30:25.964 --rc genhtml_function_coverage=1 00:30:25.964 --rc genhtml_legend=1 00:30:25.964 --rc geninfo_all_blocks=1 00:30:25.964 --rc geninfo_unexecuted_blocks=1 00:30:25.964 00:30:25.964 ' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:25.964 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.965 10:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.965 10:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.965 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:32.532 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.533 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:32.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:32.533 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:32.533 Found net devices under 0000:86:00.0: cvl_0_0 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.533 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:32.533 Found net devices under 0000:86:00.1: cvl_0_1 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.533 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.533 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.534 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:30:32.534 00:30:32.534 --- 10.0.0.2 ping statistics --- 00:30:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.534 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:30:32.534 00:30:32.534 --- 10.0.0.1 ping statistics --- 00:30:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.534 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:32.534 only one NIC for nvmf test 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:32.534 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.534 rmmod nvme_tcp 00:30:32.534 rmmod nvme_fabrics 00:30:32.534 rmmod nvme_keyring 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:32.534 10:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.534 10:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:33.907 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.908 
10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.908 00:30:33.908 real 0m8.257s 00:30:33.908 user 0m1.796s 00:30:33.908 sys 0m4.488s 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:33.908 ************************************ 00:30:33.908 END TEST nvmf_target_multipath 00:30:33.908 ************************************ 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:33.908 ************************************ 00:30:33.908 START TEST nvmf_zcopy 00:30:33.908 ************************************ 00:30:33.908 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:34.167 * Looking for test storage... 
00:30:34.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:34.167 10:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.167 --rc genhtml_branch_coverage=1 00:30:34.167 --rc genhtml_function_coverage=1 00:30:34.167 --rc genhtml_legend=1 00:30:34.167 --rc geninfo_all_blocks=1 00:30:34.167 --rc geninfo_unexecuted_blocks=1 00:30:34.167 00:30:34.167 ' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.167 --rc genhtml_branch_coverage=1 00:30:34.167 --rc genhtml_function_coverage=1 00:30:34.167 --rc genhtml_legend=1 00:30:34.167 --rc geninfo_all_blocks=1 00:30:34.167 --rc geninfo_unexecuted_blocks=1 00:30:34.167 00:30:34.167 ' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.167 --rc genhtml_branch_coverage=1 00:30:34.167 --rc genhtml_function_coverage=1 00:30:34.167 --rc genhtml_legend=1 00:30:34.167 --rc geninfo_all_blocks=1 00:30:34.167 --rc geninfo_unexecuted_blocks=1 00:30:34.167 00:30:34.167 ' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.167 --rc genhtml_branch_coverage=1 00:30:34.167 --rc genhtml_function_coverage=1 00:30:34.167 --rc genhtml_legend=1 00:30:34.167 --rc geninfo_all_blocks=1 00:30:34.167 --rc geninfo_unexecuted_blocks=1 00:30:34.167 00:30:34.167 ' 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:34.167 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.168 10:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.168 10:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:34.168 10:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:40.735 
10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.735 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:40.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:40.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:40.735 Found net devices under 0000:86:00.0: cvl_0_0 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:40.735 Found net devices under 0000:86:00.1: cvl_0_1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:40.735 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.736 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:40.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:30:40.736 00:30:40.736 --- 10.0.0.2 ping statistics --- 00:30:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.736 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:30:40.736 00:30:40.736 --- 10.0.0.1 ping statistics --- 00:30:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.736 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3701809 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3701809 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3701809 ']' 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.736 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 [2024-11-20 10:47:40.806164] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.736 [2024-11-20 10:47:40.807121] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:30:40.736 [2024-11-20 10:47:40.807155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.736 [2024-11-20 10:47:40.886932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.736 [2024-11-20 10:47:40.928494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.736 [2024-11-20 10:47:40.928531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.736 [2024-11-20 10:47:40.928539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.736 [2024-11-20 10:47:40.928545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.736 [2024-11-20 10:47:40.928550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.736 [2024-11-20 10:47:40.929090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.736 [2024-11-20 10:47:40.996838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:40.736 [2024-11-20 10:47:40.997067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 [2024-11-20 10:47:41.065749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 
10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 [2024-11-20 10:47:41.094017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 malloc0 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:40.736 { 00:30:40.736 "params": { 00:30:40.736 "name": "Nvme$subsystem", 00:30:40.736 "trtype": "$TEST_TRANSPORT", 00:30:40.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.736 "adrfam": "ipv4", 00:30:40.736 "trsvcid": "$NVMF_PORT", 00:30:40.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.736 "hdgst": ${hdgst:-false}, 00:30:40.736 "ddgst": ${ddgst:-false} 00:30:40.736 }, 00:30:40.736 "method": "bdev_nvme_attach_controller" 00:30:40.736 } 00:30:40.736 EOF 00:30:40.736 )") 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:40.736 10:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:40.736 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:40.737 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:40.737 "params": { 00:30:40.737 "name": "Nvme1", 00:30:40.737 "trtype": "tcp", 00:30:40.737 "traddr": "10.0.0.2", 00:30:40.737 "adrfam": "ipv4", 00:30:40.737 "trsvcid": "4420", 00:30:40.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.737 "hdgst": false, 00:30:40.737 "ddgst": false 00:30:40.737 }, 00:30:40.737 "method": "bdev_nvme_attach_controller" 00:30:40.737 }' 00:30:40.737 [2024-11-20 10:47:41.188107] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:30:40.737 [2024-11-20 10:47:41.188155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701836 ] 00:30:40.737 [2024-11-20 10:47:41.262081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.737 [2024-11-20 10:47:41.303795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.995 Running I/O for 10 seconds... 
00:30:42.866 8197.00 IOPS, 64.04 MiB/s [2024-11-20T09:47:44.535Z] 8228.00 IOPS, 64.28 MiB/s [2024-11-20T09:47:45.913Z] 8251.00 IOPS, 64.46 MiB/s [2024-11-20T09:47:46.850Z] 8260.25 IOPS, 64.53 MiB/s [2024-11-20T09:47:47.787Z] 8269.00 IOPS, 64.60 MiB/s [2024-11-20T09:47:48.724Z] 8274.17 IOPS, 64.64 MiB/s [2024-11-20T09:47:49.668Z] 8283.43 IOPS, 64.71 MiB/s [2024-11-20T09:47:50.604Z] 8286.25 IOPS, 64.74 MiB/s [2024-11-20T09:47:51.541Z] 8276.00 IOPS, 64.66 MiB/s [2024-11-20T09:47:51.541Z] 8269.10 IOPS, 64.60 MiB/s 00:30:50.810 Latency(us) 00:30:50.810 [2024-11-20T09:47:51.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.810 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:50.810 Verification LBA range: start 0x0 length 0x1000 00:30:50.810 Nvme1n1 : 10.01 8273.18 64.63 0.00 0.00 15428.71 897.56 21769.35 00:30:50.810 [2024-11-20T09:47:51.541Z] =================================================================================================================== 00:30:50.810 [2024-11-20T09:47:51.541Z] Total : 8273.18 64.63 0.00 0.00 15428.71 897.56 21769.35 00:30:51.069 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3703558 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:51.070 10:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.070 { 00:30:51.070 "params": { 00:30:51.070 "name": "Nvme$subsystem", 00:30:51.070 "trtype": "$TEST_TRANSPORT", 00:30:51.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.070 "adrfam": "ipv4", 00:30:51.070 "trsvcid": "$NVMF_PORT", 00:30:51.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.070 "hdgst": ${hdgst:-false}, 00:30:51.070 "ddgst": ${ddgst:-false} 00:30:51.070 }, 00:30:51.070 "method": "bdev_nvme_attach_controller" 00:30:51.070 } 00:30:51.070 EOF 00:30:51.070 )") 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:51.070 [2024-11-20 10:47:51.697443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.697474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:51.070 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.070 "params": { 00:30:51.070 "name": "Nvme1", 00:30:51.070 "trtype": "tcp", 00:30:51.070 "traddr": "10.0.0.2", 00:30:51.070 "adrfam": "ipv4", 00:30:51.070 "trsvcid": "4420", 00:30:51.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.070 "hdgst": false, 00:30:51.070 "ddgst": false 00:30:51.070 }, 00:30:51.070 "method": "bdev_nvme_attach_controller" 00:30:51.070 }' 00:30:51.070 [2024-11-20 10:47:51.709408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.709422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.721403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.721414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.733402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.733414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.736959] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:30:51.070 [2024-11-20 10:47:51.737009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703558 ] 00:30:51.070 [2024-11-20 10:47:51.745403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.745415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.757400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.757411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.769404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.769415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.781401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.781412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.070 [2024-11-20 10:47:51.793401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.070 [2024-11-20 10:47:51.793411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.328 [2024-11-20 10:47:51.805407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.328 [2024-11-20 10:47:51.805421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.328 [2024-11-20 10:47:51.814306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.328 [2024-11-20 10:47:51.817402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:51.328 [2024-11-20 10:47:51.817414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.829403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.829419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.841404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.841414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.853404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.853418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.855118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.329 [2024-11-20 10:47:51.865409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.865423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.877410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.877433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.889412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.889429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.901408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.901425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.913408] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.913423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.925403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.925415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.937408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.937426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.949409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.949427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.961414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.961431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.973408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.973423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:51.985408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:51.985423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:52.035249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:52.035268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 [2024-11-20 10:47:52.045405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:51.329 [2024-11-20 10:47:52.045418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.329 Running I/O for 5 seconds... 00:30:51.587 [2024-11-20 10:47:52.059267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.587 [2024-11-20 10:47:52.059289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.074873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.074893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.090238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.090258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.105599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.105619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.117624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.117644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.131355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.131374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.146501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.146520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.161711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.161729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.178040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.178059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.189769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.189787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.203098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.203116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.218681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.218699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.234532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.234550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.249849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.249867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.262207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.262225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.275438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 
[2024-11-20 10:47:52.275457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.291411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.291430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.588 [2024-11-20 10:47:52.307275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.588 [2024-11-20 10:47:52.307295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.322872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.322891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.338312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.338330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.353484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.353503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.364807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.364826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.379612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.379631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.394995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.395014] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.410811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.410831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.426381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.426400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.442130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.442149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.457641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.457660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.469538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.469557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.483412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.483431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.499288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.499307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.514439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.514458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:51.847 [2024-11-20 10:47:52.529862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.529880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.541602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.541620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.554785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.554809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.847 [2024-11-20 10:47:52.570324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.847 [2024-11-20 10:47:52.570343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.586047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.586066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.601769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.601788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.617439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.617459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.631679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.631698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.647357] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.647376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.662683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.662703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.677725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.677744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.694183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.694201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.705788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.705807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.719553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.719572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.735185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.735204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.750800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.750819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.766324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.766343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.781731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.781750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.794401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.794420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.809919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.809937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.107 [2024-11-20 10:47:52.822241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.107 [2024-11-20 10:47:52.822260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.838235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.838260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.853082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.853101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.866207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.866227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.879325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 
[2024-11-20 10:47:52.879346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.895042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.895062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.910514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.910535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.926383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.926402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.941764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.941784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.957043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.957063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.971249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.971268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:52.986617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:52.986636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.002075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.002094] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.017739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.017758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.030003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.030031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.043511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.043531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 15880.00 IOPS, 124.06 MiB/s [2024-11-20T09:47:53.097Z] [2024-11-20 10:47:53.058655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.058674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.073632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.073653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.366 [2024-11-20 10:47:53.086340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.366 [2024-11-20 10:47:53.086360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.626 [2024-11-20 10:47:53.101808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.626 [2024-11-20 10:47:53.101828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.626 [2024-11-20 10:47:53.113271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.626 [2024-11-20 10:47:53.113295] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.626 [2024-11-20 10:47:53.127555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.626 [2024-11-20 10:47:53.127574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(this same pair of errors — "Requested NSID 1 already in use" from subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from nvmf_rpc.c:1517:nvmf_rpc_ns_paused — repeats at roughly 11–16 ms intervals from [2024-11-20 10:47:53.142819] through [2024-11-20 10:47:55.402798], with the elapsed-time stamps advancing from 00:30:52.626 to 00:30:54.784; two I/O performance samples are interleaved with the error loop:)
00:30:53.463 15861.50 IOPS, 123.92 MiB/s [2024-11-20T09:47:54.194Z]
00:30:54.525 15882.33 IOPS, 124.08 MiB/s [2024-11-20T09:47:55.256Z]
(the section then breaks off mid-entry at:)
00:30:54.784 [2024-11-20 10:47:55.418
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.784 [2024-11-20 10:47:55.418103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.784 [2024-11-20 10:47:55.430151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.784 [2024-11-20 10:47:55.430170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.784 [2024-11-20 10:47:55.443143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.784 [2024-11-20 10:47:55.443162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.784 [2024-11-20 10:47:55.458896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.784 [2024-11-20 10:47:55.458915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.784 [2024-11-20 10:47:55.474254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.784 [2024-11-20 10:47:55.474274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.785 [2024-11-20 10:47:55.490120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.785 [2024-11-20 10:47:55.490139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.785 [2024-11-20 10:47:55.505481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.785 [2024-11-20 10:47:55.505500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.518569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.518588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.534244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.534263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.549597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.549617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.563092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.563110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.578888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.578906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.594633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.594652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.610167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.610186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.625967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.625985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.641723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.641742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.654849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 
[2024-11-20 10:47:55.654868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.670454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.670473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.686520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.686539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.701850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.701869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.717235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.717254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.731535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.731554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.747248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.747268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.043 [2024-11-20 10:47:55.762610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.043 [2024-11-20 10:47:55.762629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.778138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.778159] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.793279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.793300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.805002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.805022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.819598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.819618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.835417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.835436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.850869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.850891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.866332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.866351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.878167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.878186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.889829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.889851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:55.302 [2024-11-20 10:47:55.903310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.903329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.919007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.919027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.934348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.934367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.949781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.949800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.961432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.961452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.975358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.975377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:55.991088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:55.991108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:56.006507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:56.006528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:56.017429] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:56.017448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.302 [2024-11-20 10:47:56.031328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.302 [2024-11-20 10:47:56.031348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.047108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.047128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 15874.75 IOPS, 124.02 MiB/s [2024-11-20T09:47:56.292Z] [2024-11-20 10:47:56.062234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.062253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.077835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.077854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.089709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.089728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.103466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.103485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.119048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.119068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.134593] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.134613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.150347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.150367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.165868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.165893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.178015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.178035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.191170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.191191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.206941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.206967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.222434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.222453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.237725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.237744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.253551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.253570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.266879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.266898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.561 [2024-11-20 10:47:56.282718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.561 [2024-11-20 10:47:56.282737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.298018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.298038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.309692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.309711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.323669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.323688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.339075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.339093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.354820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.354838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.370036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 
[2024-11-20 10:47:56.370055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.385871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.385889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.398566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.398585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.409587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.409606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.422943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.422966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.438816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.438840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.454279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.454297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.466095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.466113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.479018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.479037] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.494727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.494746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.510237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.510256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.525854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.525874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.820 [2024-11-20 10:47:56.537968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.820 [2024-11-20 10:47:56.537986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.549863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.549883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.561700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.561718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.575656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.575675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.591823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.591842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:56.079 [2024-11-20 10:47:56.606883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.606902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.623036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.623055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.638517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.638535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.653653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.653672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.666977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.666996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.682482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.682500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.697822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.697840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.713634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.713653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.725986] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.726005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.739050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.739069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.754462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.754480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.769804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.769822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.785215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.785235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.079 [2024-11-20 10:47:56.797172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.079 [2024-11-20 10:47:56.797191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.811315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.811335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.826964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.826983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.842371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.842390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.857961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.857979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.873503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.873522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.885558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.885578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.899786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.899806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.915578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.915597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.931407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.931426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.947209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.947228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.962737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 
[2024-11-20 10:47:56.962755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.977955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.977973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:56.989215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:56.989234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:57.002868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:57.002887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:57.018383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:57.018402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:57.034068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:57.034087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.338 [2024-11-20 10:47:57.044493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.338 [2024-11-20 10:47:57.044512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.339 [2024-11-20 10:47:57.059846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.339 [2024-11-20 10:47:57.059865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.596 15871.60 IOPS, 124.00 MiB/s 00:30:56.596 Latency(us) 00:30:56.596 [2024-11-20T09:47:57.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.597 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:56.597 Nvme1n1 : 5.01 15879.88 124.06 0.00 0.00 8054.01 2065.81 13449.13
00:30:56.597 [2024-11-20T09:47:57.328Z] ===================================================================================================================
00:30:56.597 [2024-11-20T09:47:57.328Z] Total : 15879.88 124.06 0.00 0.00 8054.01 2065.81 13449.13
00:30:56.597 [2024-11-20 10:47:57.069411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:56.597 [2024-11-20 10:47:57.069429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two-line error pair repeated at ~12 ms intervals, 10:47:57.081405 through 10:47:57.225413 ...]
00:30:56.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3703558) - No such process
00:30:56.597 10:47:57
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3703558 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.597 delay0 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.597 10:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:56.855 [2024-11-20 10:47:57.334388] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:03.423 Initializing NVMe Controllers 00:31:03.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.423 Initialization complete. Launching workers. 00:31:03.423 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 881 00:31:03.423 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1171, failed to submit 30 00:31:03.423 success 1020, unsuccessful 151, failed 0 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.423 rmmod nvme_tcp 00:31:03.423 rmmod nvme_fabrics 00:31:03.423 rmmod nvme_keyring 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.423 10:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3701809 ']' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3701809 ']' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701809' 00:31:03.423 killing process with pid 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3701809 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.423 10:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.423 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.330 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.330 00:31:05.330 real 0m31.406s 00:31:05.330 user 0m40.757s 00:31:05.330 sys 0m12.064s 00:31:05.330 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.330 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.330 ************************************ 00:31:05.330 END TEST nvmf_zcopy 00:31:05.330 ************************************ 00:31:05.590 10:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:05.590 ************************************ 00:31:05.590 START TEST nvmf_nmic 00:31:05.590 ************************************ 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:05.590 * Looking for test storage... 00:31:05.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.590 --rc genhtml_branch_coverage=1 00:31:05.590 --rc 
genhtml_function_coverage=1 00:31:05.590 --rc genhtml_legend=1 00:31:05.590 --rc geninfo_all_blocks=1 00:31:05.590 --rc geninfo_unexecuted_blocks=1 00:31:05.590 00:31:05.590 ' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.590 --rc genhtml_branch_coverage=1 00:31:05.590 --rc genhtml_function_coverage=1 00:31:05.590 --rc genhtml_legend=1 00:31:05.590 --rc geninfo_all_blocks=1 00:31:05.590 --rc geninfo_unexecuted_blocks=1 00:31:05.590 00:31:05.590 ' 00:31:05.590 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.590 --rc genhtml_branch_coverage=1 00:31:05.591 --rc genhtml_function_coverage=1 00:31:05.591 --rc genhtml_legend=1 00:31:05.591 --rc geninfo_all_blocks=1 00:31:05.591 --rc geninfo_unexecuted_blocks=1 00:31:05.591 00:31:05.591 ' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.591 --rc genhtml_branch_coverage=1 00:31:05.591 --rc genhtml_function_coverage=1 00:31:05.591 --rc genhtml_legend=1 00:31:05.591 --rc geninfo_all_blocks=1 00:31:05.591 --rc geninfo_unexecuted_blocks=1 00:31:05.591 00:31:05.591 ' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.591 10:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.591 10:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.591 10:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.591 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.850 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.850 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:12.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:12.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.422 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:12.423 Found net devices under 0000:86:00.0: cvl_0_0 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:12.423 Found net devices under 0000:86:00.1: cvl_0_1 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.423 10:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.423 10:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:12.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:31:12.423 00:31:12.423 --- 10.0.0.2 ping statistics --- 00:31:12.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.423 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:31:12.423 00:31:12.423 --- 10.0.0.1 ping statistics --- 00:31:12.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.423 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:12.423 10:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3709516 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3709516 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3709516 ']' 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.423 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.423 [2024-11-20 10:48:12.269555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:12.423 [2024-11-20 10:48:12.270565] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:31:12.423 [2024-11-20 10:48:12.270604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.423 [2024-11-20 10:48:12.351778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.423 [2024-11-20 10:48:12.396142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.423 [2024-11-20 10:48:12.396181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.423 [2024-11-20 10:48:12.396189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.423 [2024-11-20 10:48:12.396195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.423 [2024-11-20 10:48:12.396200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.423 [2024-11-20 10:48:12.397768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.423 [2024-11-20 10:48:12.397882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.423 [2024-11-20 10:48:12.398016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.423 [2024-11-20 10:48:12.398017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.423 [2024-11-20 10:48:12.466413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:12.423 [2024-11-20 10:48:12.467371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:12.423 [2024-11-20 10:48:12.467500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:12.424 [2024-11-20 10:48:12.467837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:12.424 [2024-11-20 10:48:12.467895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 [2024-11-20 10:48:12.534791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 Malloc0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 [2024-11-20 
10:48:12.618999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:12.424 test case1: single bdev can't be used in multiple subsystems 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 [2024-11-20 10:48:12.650485] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:12.424 [2024-11-20 10:48:12.650507] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:12.424 [2024-11-20 10:48:12.650519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.424 request: 00:31:12.424 { 00:31:12.424 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:12.424 "namespace": { 00:31:12.424 "bdev_name": "Malloc0", 00:31:12.424 "no_auto_visible": false 00:31:12.424 }, 00:31:12.424 "method": "nvmf_subsystem_add_ns", 00:31:12.424 "req_id": 1 00:31:12.424 } 00:31:12.424 Got JSON-RPC error response 00:31:12.424 response: 00:31:12.424 { 00:31:12.424 "code": -32602, 00:31:12.424 "message": "Invalid parameters" 00:31:12.424 } 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:12.424 Adding namespace failed - expected result. 
00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:12.424 test case2: host connect to nvmf target in multiple paths 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 [2024-11-20 10:48:12.662582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:12.424 10:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:12.424 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:12.424 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:12.424 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:12.424 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:12.681 10:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:14.583 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:14.584 10:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:14.584 [global] 00:31:14.584 thread=1 00:31:14.584 invalidate=1 00:31:14.584 rw=write 00:31:14.584 time_based=1 00:31:14.584 runtime=1 00:31:14.584 ioengine=libaio 00:31:14.584 direct=1 00:31:14.584 bs=4096 00:31:14.584 iodepth=1 00:31:14.584 norandommap=0 00:31:14.584 numjobs=1 00:31:14.584 00:31:14.584 verify_dump=1 00:31:14.584 verify_backlog=512 00:31:14.584 verify_state_save=0 00:31:14.584 do_verify=1 00:31:14.584 verify=crc32c-intel 00:31:14.584 [job0] 00:31:14.584 filename=/dev/nvme0n1 00:31:14.584 Could not set queue depth (nvme0n1) 00:31:14.843 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:14.843 fio-3.35 00:31:14.843 Starting 1 thread 00:31:16.220 00:31:16.220 job0: (groupid=0, jobs=1): err= 0: pid=3710135: Wed Nov 20 
10:48:16 2024 00:31:16.220 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:31:16.220 slat (nsec): min=10614, max=34398, avg=24249.82, stdev=3962.05 00:31:16.220 clat (usec): min=40898, max=42908, avg=41071.14, stdev=416.95 00:31:16.220 lat (usec): min=40922, max=42938, avg=41095.39, stdev=417.90 00:31:16.220 clat percentiles (usec): 00:31:16.220 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:16.220 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:16.220 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:16.220 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:16.220 | 99.99th=[42730] 00:31:16.220 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:31:16.220 slat (usec): min=10, max=25874, avg=62.67, stdev=1142.98 00:31:16.220 clat (usec): min=132, max=309, avg=144.00, stdev=14.44 00:31:16.220 lat (usec): min=143, max=26145, avg=206.67, stdev=1148.70 00:31:16.220 clat percentiles (usec): 00:31:16.220 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:16.220 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:31:16.220 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 157], 00:31:16.220 | 99.00th=[ 196], 99.50th=[ 231], 99.90th=[ 310], 99.95th=[ 310], 00:31:16.220 | 99.99th=[ 310] 00:31:16.220 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:16.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:16.220 lat (usec) : 250=95.51%, 500=0.37% 00:31:16.220 lat (msec) : 50=4.12% 00:31:16.220 cpu : usr=1.28%, sys=0.10%, ctx=536, majf=0, minf=1 00:31:16.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.220 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:16.220 00:31:16.220 Run status group 0 (all jobs): 00:31:16.220 READ: bw=86.8KiB/s (88.9kB/s), 86.8KiB/s-86.8KiB/s (88.9kB/s-88.9kB/s), io=88.0KiB (90.1kB), run=1014-1014msec 00:31:16.220 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:31:16.220 00:31:16.220 Disk stats (read/write): 00:31:16.220 nvme0n1: ios=45/512, merge=0/0, ticks=1767/66, in_queue=1833, util=98.30% 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:16.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:16.220 10:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.220 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.221 rmmod nvme_tcp 00:31:16.221 rmmod nvme_fabrics 00:31:16.221 rmmod nvme_keyring 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3709516 ']' 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3709516 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3709516 ']' 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3709516 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709516 
00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709516' 00:31:16.221 killing process with pid 3709516 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3709516 00:31:16.221 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3709516 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.480 10:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.480 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.013 00:31:19.013 real 0m13.075s 00:31:19.013 user 0m24.031s 00:31:19.013 sys 0m6.068s 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:19.013 ************************************ 00:31:19.013 END TEST nvmf_nmic 00:31:19.013 ************************************ 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.013 ************************************ 00:31:19.013 START TEST nvmf_fio_target 00:31:19.013 ************************************ 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:19.013 * Looking for test storage... 
00:31:19.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.013 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.014 
10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.014 --rc genhtml_branch_coverage=1 00:31:19.014 --rc genhtml_function_coverage=1 00:31:19.014 --rc genhtml_legend=1 00:31:19.014 --rc geninfo_all_blocks=1 00:31:19.014 --rc geninfo_unexecuted_blocks=1 00:31:19.014 00:31:19.014 ' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.014 --rc genhtml_branch_coverage=1 00:31:19.014 --rc genhtml_function_coverage=1 00:31:19.014 --rc genhtml_legend=1 00:31:19.014 --rc geninfo_all_blocks=1 00:31:19.014 --rc geninfo_unexecuted_blocks=1 00:31:19.014 00:31:19.014 ' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.014 --rc genhtml_branch_coverage=1 00:31:19.014 --rc genhtml_function_coverage=1 00:31:19.014 --rc genhtml_legend=1 00:31:19.014 --rc geninfo_all_blocks=1 00:31:19.014 --rc geninfo_unexecuted_blocks=1 00:31:19.014 00:31:19.014 ' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.014 --rc genhtml_branch_coverage=1 00:31:19.014 --rc genhtml_function_coverage=1 00:31:19.014 --rc genhtml_legend=1 00:31:19.014 --rc geninfo_all_blocks=1 
00:31:19.014 --rc geninfo_unexecuted_blocks=1 00:31:19.014 00:31:19.014 ' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:19.014 
10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.014 10:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.014 
10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.014 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.015 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.015 10:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.591 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.592 10:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:25.592 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:25.592 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.592 
10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:25.592 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:25.592 Found net devices under 0000:86:00.1: cvl_0_1 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.592 10:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.592 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:31:25.593 00:31:25.593 --- 10.0.0.2 ping statistics --- 00:31:25.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.593 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:25.593 00:31:25.593 --- 10.0.0.1 ping statistics --- 00:31:25.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.593 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.593 10:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3713886 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3713886 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3713886 ']' 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.593 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.593 [2024-11-20 10:48:25.413375] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.593 [2024-11-20 10:48:25.414365] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:31:25.593 [2024-11-20 10:48:25.414403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.593 [2024-11-20 10:48:25.494980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.593 [2024-11-20 10:48:25.538003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.593 [2024-11-20 10:48:25.538039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.593 [2024-11-20 10:48:25.538047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.593 [2024-11-20 10:48:25.538053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.593 [2024-11-20 10:48:25.538058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.593 [2024-11-20 10:48:25.539646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.593 [2024-11-20 10:48:25.539669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.593 [2024-11-20 10:48:25.539756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.593 [2024-11-20 10:48:25.539756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.593 [2024-11-20 10:48:25.608677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.593 [2024-11-20 10:48:25.608718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:25.593 [2024-11-20 10:48:25.609330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:25.594 [2024-11-20 10:48:25.609444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:25.594 [2024-11-20 10:48:25.609581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.594 [2024-11-20 10:48:25.844598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.594 10:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.594 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:25.594 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:25.852 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:25.852 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.852 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:25.852 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.109 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:26.109 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:26.367 10:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.625 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:26.625 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.625 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:26.625 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.884 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:26.884 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:27.143 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:27.400 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:27.400 10:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.400 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:27.401 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:27.658 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.916 [2024-11-20 10:48:28.484509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.916 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:28.176 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:28.434 10:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:28.434 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:30.957 10:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:30.957 [global] 00:31:30.957 thread=1 00:31:30.957 invalidate=1 00:31:30.957 rw=write 00:31:30.957 time_based=1 00:31:30.957 runtime=1 00:31:30.957 ioengine=libaio 00:31:30.957 direct=1 00:31:30.957 bs=4096 00:31:30.957 iodepth=1 00:31:30.957 norandommap=0 00:31:30.957 numjobs=1 00:31:30.957 00:31:30.957 verify_dump=1 00:31:30.957 verify_backlog=512 00:31:30.957 verify_state_save=0 00:31:30.957 do_verify=1 00:31:30.957 verify=crc32c-intel 00:31:30.957 [job0] 00:31:30.957 filename=/dev/nvme0n1 00:31:30.957 [job1] 00:31:30.957 filename=/dev/nvme0n2 00:31:30.957 [job2] 00:31:30.957 filename=/dev/nvme0n3 00:31:30.957 [job3] 00:31:30.957 filename=/dev/nvme0n4 00:31:30.957 Could not set queue depth (nvme0n1) 00:31:30.957 Could not set queue depth (nvme0n2) 00:31:30.957 Could not set queue depth (nvme0n3) 00:31:30.957 Could not set queue depth (nvme0n4) 00:31:30.957 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.957 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.957 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.958 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.958 fio-3.35 00:31:30.958 Starting 4 threads 00:31:32.333 00:31:32.333 job0: (groupid=0, jobs=1): err= 0: pid=3715008: Wed Nov 20 10:48:32 2024 00:31:32.333 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:32.333 slat (nsec): min=7098, max=27531, avg=8752.30, stdev=3139.88 00:31:32.333 clat (usec): min=196, max=42071, avg=1588.09, stdev=7310.48 00:31:32.333 lat (usec): min=204, 
max=42094, avg=1596.85, stdev=7312.81 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:31:32.333 | 30.00th=[ 223], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 245], 00:31:32.333 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:31:32.333 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:32.333 | 99.99th=[42206] 00:31:32.333 write: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec); 0 zone resets 00:31:32.333 slat (usec): min=10, max=25264, avg=47.26, stdev=941.08 00:31:32.333 clat (usec): min=125, max=348, avg=196.50, stdev=46.59 00:31:32.333 lat (usec): min=137, max=25598, avg=243.76, stdev=947.38 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 155], 00:31:32.333 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 182], 60.00th=[ 225], 00:31:32.333 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 281], 00:31:32.333 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:31:32.333 | 99.99th=[ 351] 00:31:32.333 bw ( KiB/s): min= 4087, max= 4087, per=24.31%, avg=4087.00, stdev= 0.00, samples=1 00:31:32.333 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:32.333 lat (usec) : 250=86.93%, 500=11.69% 00:31:32.333 lat (msec) : 50=1.38% 00:31:32.333 cpu : usr=1.40%, sys=1.50%, ctx=1235, majf=0, minf=1 00:31:32.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 issued rwts: total=512,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:32.333 job1: (groupid=0, jobs=1): err= 0: pid=3715009: Wed Nov 20 10:48:32 2024 00:31:32.333 read: IOPS=1399, BW=5598KiB/s (5733kB/s)(5604KiB/1001msec) 00:31:32.333 
slat (nsec): min=6460, max=26642, avg=7572.17, stdev=1864.54 00:31:32.333 clat (usec): min=169, max=41207, avg=491.80, stdev=3429.25 00:31:32.333 lat (usec): min=184, max=41215, avg=499.37, stdev=3429.57 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 184], 00:31:32.333 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:31:32.333 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 258], 00:31:32.333 | 99.00th=[ 297], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:32.333 | 99.99th=[41157] 00:31:32.333 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:31:32.333 slat (nsec): min=9197, max=39280, avg=10687.71, stdev=1589.51 00:31:32.333 clat (usec): min=118, max=2878, avg=178.61, stdev=87.30 00:31:32.333 lat (usec): min=128, max=2890, avg=189.29, stdev=87.53 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 128], 00:31:32.333 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 153], 60.00th=[ 186], 00:31:32.333 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 247], 00:31:32.333 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 400], 99.95th=[ 2868], 00:31:32.333 | 99.99th=[ 2868] 00:31:32.333 bw ( KiB/s): min= 7401, max= 7401, per=44.02%, avg=7401.00, stdev= 0.00, samples=1 00:31:32.333 iops : min= 1850, max= 1850, avg=1850.00, stdev= 0.00, samples=1 00:31:32.333 lat (usec) : 250=95.68%, 500=3.95% 00:31:32.333 lat (msec) : 4=0.03%, 50=0.34% 00:31:32.333 cpu : usr=1.20%, sys=3.00%, ctx=2939, majf=0, minf=1 00:31:32.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 issued rwts: total=1401,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.333 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:31:32.333 job2: (groupid=0, jobs=1): err= 0: pid=3715010: Wed Nov 20 10:48:32 2024 00:31:32.333 read: IOPS=1147, BW=4591KiB/s (4701kB/s)(4600KiB/1002msec) 00:31:32.333 slat (nsec): min=6398, max=28422, avg=7449.91, stdev=1817.79 00:31:32.333 clat (usec): min=186, max=41446, avg=608.48, stdev=3977.93 00:31:32.333 lat (usec): min=193, max=41452, avg=615.93, stdev=3978.61 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:31:32.333 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:31:32.333 | 70.00th=[ 210], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 281], 00:31:32.333 | 99.00th=[ 449], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:31:32.333 | 99.99th=[41681] 00:31:32.333 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:31:32.333 slat (nsec): min=9034, max=39209, avg=10232.86, stdev=1450.04 00:31:32.333 clat (usec): min=130, max=451, avg=177.30, stdev=61.12 00:31:32.333 lat (usec): min=139, max=461, avg=187.54, stdev=61.30 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:32.333 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:31:32.333 | 70.00th=[ 182], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 265], 00:31:32.333 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 441], 99.95th=[ 453], 00:31:32.333 | 99.99th=[ 453] 00:31:32.333 bw ( KiB/s): min= 4096, max= 8175, per=36.49%, avg=6135.50, stdev=2884.29, samples=2 00:31:32.333 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:31:32.333 lat (usec) : 250=88.87%, 500=10.72% 00:31:32.333 lat (msec) : 50=0.41% 00:31:32.333 cpu : usr=1.30%, sys=2.40%, ctx=2686, majf=0, minf=1 00:31:32.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.333 issued rwts: total=1150,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:32.333 job3: (groupid=0, jobs=1): err= 0: pid=3715011: Wed Nov 20 10:48:32 2024 00:31:32.333 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:31:32.333 slat (nsec): min=9747, max=23033, avg=21912.77, stdev=2747.23 00:31:32.333 clat (usec): min=40811, max=41998, avg=41030.14, stdev=235.31 00:31:32.333 lat (usec): min=40834, max=42020, avg=41052.05, stdev=234.72 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:32.333 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:32.333 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:32.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:32.333 | 99.99th=[42206] 00:31:32.333 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:31:32.333 slat (nsec): min=5864, max=41227, avg=11035.67, stdev=2176.22 00:31:32.333 clat (usec): min=135, max=475, avg=222.65, stdev=36.76 00:31:32.333 lat (usec): min=145, max=485, avg=233.69, stdev=37.31 00:31:32.333 clat percentiles (usec): 00:31:32.333 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:31:32.333 | 30.00th=[ 198], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 233], 00:31:32.333 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 281], 00:31:32.333 | 99.00th=[ 322], 99.50th=[ 416], 99.90th=[ 474], 99.95th=[ 474], 00:31:32.333 | 99.99th=[ 474] 00:31:32.333 bw ( KiB/s): min= 4087, max= 4087, per=24.31%, avg=4087.00, stdev= 0.00, samples=1 00:31:32.334 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:32.334 lat (usec) : 250=84.08%, 500=11.80% 00:31:32.334 lat (msec) : 50=4.12% 00:31:32.334 cpu : usr=0.29%, sys=0.59%, ctx=534, majf=0, minf=2 00:31:32.334 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.334 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:32.334 00:31:32.334 Run status group 0 (all jobs): 00:31:32.334 READ: bw=11.8MiB/s (12.3MB/s), 85.9KiB/s-5598KiB/s (88.0kB/s-5733kB/s), io=12.1MiB (12.6MB), run=1001-1024msec 00:31:32.334 WRITE: bw=16.4MiB/s (17.2MB/s), 2000KiB/s-6138KiB/s (2048kB/s-6285kB/s), io=16.8MiB (17.6MB), run=1001-1024msec 00:31:32.334 00:31:32.334 Disk stats (read/write): 00:31:32.334 nvme0n1: ios=333/512, merge=0/0, ticks=1711/111, in_queue=1822, util=97.90% 00:31:32.334 nvme0n2: ios=1066/1152, merge=0/0, ticks=760/210, in_queue=970, util=100.00% 00:31:32.334 nvme0n3: ios=1146/1536, merge=0/0, ticks=530/263, in_queue=793, util=89.05% 00:31:32.334 nvme0n4: ios=17/512, merge=0/0, ticks=698/109, in_queue=807, util=89.71% 00:31:32.334 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:32.334 [global] 00:31:32.334 thread=1 00:31:32.334 invalidate=1 00:31:32.334 rw=randwrite 00:31:32.334 time_based=1 00:31:32.334 runtime=1 00:31:32.334 ioengine=libaio 00:31:32.334 direct=1 00:31:32.334 bs=4096 00:31:32.334 iodepth=1 00:31:32.334 norandommap=0 00:31:32.334 numjobs=1 00:31:32.334 00:31:32.334 verify_dump=1 00:31:32.334 verify_backlog=512 00:31:32.334 verify_state_save=0 00:31:32.334 do_verify=1 00:31:32.334 verify=crc32c-intel 00:31:32.334 [job0] 00:31:32.334 filename=/dev/nvme0n1 00:31:32.334 [job1] 00:31:32.334 filename=/dev/nvme0n2 00:31:32.334 [job2] 00:31:32.334 filename=/dev/nvme0n3 00:31:32.334 [job3] 00:31:32.334 filename=/dev/nvme0n4 00:31:32.334 
Could not set queue depth (nvme0n1) 00:31:32.334 Could not set queue depth (nvme0n2) 00:31:32.334 Could not set queue depth (nvme0n3) 00:31:32.334 Could not set queue depth (nvme0n4) 00:31:32.334 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.334 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.334 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.334 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.334 fio-3.35 00:31:32.334 Starting 4 threads 00:31:33.711 00:31:33.711 job0: (groupid=0, jobs=1): err= 0: pid=3715377: Wed Nov 20 10:48:34 2024 00:31:33.711 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:31:33.711 slat (nsec): min=10129, max=24080, avg=22626.43, stdev=2814.56 00:31:33.711 clat (usec): min=40764, max=41088, avg=40958.09, stdev=80.20 00:31:33.711 lat (usec): min=40774, max=41109, avg=40980.72, stdev=81.37 00:31:33.711 clat percentiles (usec): 00:31:33.711 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:31:33.711 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:33.711 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:33.711 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:33.711 | 99.99th=[41157] 00:31:33.711 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:31:33.711 slat (nsec): min=11069, max=39583, avg=12163.96, stdev=1900.15 00:31:33.711 clat (usec): min=151, max=328, avg=171.19, stdev=13.71 00:31:33.711 lat (usec): min=162, max=368, avg=183.36, stdev=14.47 00:31:33.711 clat percentiles (usec): 00:31:33.711 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 159], 20.00th=[ 163], 00:31:33.711 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 
172], 00:31:33.711 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 192], 00:31:33.711 | 99.00th=[ 210], 99.50th=[ 265], 99.90th=[ 330], 99.95th=[ 330], 00:31:33.711 | 99.99th=[ 330] 00:31:33.711 bw ( KiB/s): min= 4096, max= 4096, per=23.35%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.712 lat (usec) : 250=95.14%, 500=0.56% 00:31:33.712 lat (msec) : 50=4.30% 00:31:33.712 cpu : usr=0.67%, sys=0.67%, ctx=538, majf=0, minf=1 00:31:33.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.712 job1: (groupid=0, jobs=1): err= 0: pid=3715380: Wed Nov 20 10:48:34 2024 00:31:33.712 read: IOPS=23, BW=92.6KiB/s (94.8kB/s)(96.0KiB/1037msec) 00:31:33.712 slat (nsec): min=10269, max=37484, avg=22576.75, stdev=5563.69 00:31:33.712 clat (usec): min=277, max=41159, avg=39267.09, stdev=8305.26 00:31:33.712 lat (usec): min=302, max=41169, avg=39289.67, stdev=8304.72 00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[ 277], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:33.712 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:33.712 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:33.712 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:33.712 | 99.99th=[41157] 00:31:33.712 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:31:33.712 slat (nsec): min=10345, max=38460, avg=11682.36, stdev=2265.76 00:31:33.712 clat (usec): min=149, max=290, avg=167.48, stdev=13.08 00:31:33.712 lat (usec): min=160, max=318, avg=179.17, stdev=13.79 
00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:31:33.712 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:31:33.712 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:31:33.712 | 99.00th=[ 202], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 289], 00:31:33.712 | 99.99th=[ 289] 00:31:33.712 bw ( KiB/s): min= 4096, max= 4096, per=23.35%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.712 lat (usec) : 250=94.96%, 500=0.75% 00:31:33.712 lat (msec) : 50=4.29% 00:31:33.712 cpu : usr=0.68%, sys=0.68%, ctx=537, majf=0, minf=1 00:31:33.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.712 job2: (groupid=0, jobs=1): err= 0: pid=3715381: Wed Nov 20 10:48:34 2024 00:31:33.712 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:33.712 slat (nsec): min=6478, max=29145, avg=7461.44, stdev=903.64 00:31:33.712 clat (usec): min=177, max=406, avg=190.60, stdev= 9.94 00:31:33.712 lat (usec): min=184, max=413, avg=198.06, stdev= 9.99 00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 184], 20.00th=[ 186], 00:31:33.712 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:31:33.712 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 204], 00:31:33.712 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 260], 00:31:33.712 | 99.99th=[ 408] 00:31:33.712 write: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:31:33.712 slat (nsec): min=9077, max=40314, 
avg=10091.15, stdev=1278.95 00:31:33.712 clat (usec): min=119, max=368, avg=149.51, stdev=26.23 00:31:33.712 lat (usec): min=136, max=393, avg=159.60, stdev=26.37 00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 135], 00:31:33.712 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:31:33.712 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 192], 95.00th=[ 204], 00:31:33.712 | 99.00th=[ 231], 99.50th=[ 285], 99.90th=[ 363], 99.95th=[ 367], 00:31:33.712 | 99.99th=[ 371] 00:31:33.712 bw ( KiB/s): min=12288, max=12288, per=70.04%, avg=12288.00, stdev= 0.00, samples=1 00:31:33.712 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:33.712 lat (usec) : 250=99.44%, 500=0.56% 00:31:33.712 cpu : usr=2.70%, sys=5.00%, ctx=5581, majf=0, minf=1 00:31:33.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 issued rwts: total=2560,3021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.712 job3: (groupid=0, jobs=1): err= 0: pid=3715382: Wed Nov 20 10:48:34 2024 00:31:33.712 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:31:33.712 slat (nsec): min=9559, max=23657, avg=22108.18, stdev=2850.73 00:31:33.712 clat (usec): min=40629, max=41058, avg=40948.95, stdev=95.72 00:31:33.712 lat (usec): min=40638, max=41080, avg=40971.05, stdev=97.69 00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:33.712 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:33.712 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:33.712 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:33.712 
| 99.99th=[41157] 00:31:33.712 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:33.712 slat (nsec): min=10476, max=38671, avg=11850.74, stdev=2189.36 00:31:33.712 clat (usec): min=141, max=355, avg=189.81, stdev=24.34 00:31:33.712 lat (usec): min=152, max=366, avg=201.66, stdev=24.74 00:31:33.712 clat percentiles (usec): 00:31:33.712 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 174], 00:31:33.712 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:31:33.712 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 223], 00:31:33.712 | 99.00th=[ 260], 99.50th=[ 302], 99.90th=[ 355], 99.95th=[ 355], 00:31:33.712 | 99.99th=[ 355] 00:31:33.712 bw ( KiB/s): min= 4096, max= 4096, per=23.35%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.712 lat (usec) : 250=94.38%, 500=1.50% 00:31:33.712 lat (msec) : 50=4.12% 00:31:33.712 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, minf=1 00:31:33.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.712 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.712 00:31:33.712 Run status group 0 (all jobs): 00:31:33.712 READ: bw=9.88MiB/s (10.4MB/s), 87.4KiB/s-9.99MiB/s (89.5kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1039msec 00:31:33.712 WRITE: bw=17.1MiB/s (18.0MB/s), 1971KiB/s-11.8MiB/s (2018kB/s-12.4MB/s), io=17.8MiB (18.7MB), run=1001-1039msec 00:31:33.712 00:31:33.712 Disk stats (read/write): 00:31:33.712 nvme0n1: ios=68/512, merge=0/0, ticks=1659/87, in_queue=1746, util=90.18% 00:31:33.712 nvme0n2: ios=54/512, merge=0/0, ticks=1203/72, in_queue=1275, util=98.17% 00:31:33.712 nvme0n3: ios=2284/2560, 
merge=0/0, ticks=510/364, in_queue=874, util=90.97% 00:31:33.712 nvme0n4: ios=44/512, merge=0/0, ticks=1682/85, in_queue=1767, util=98.43% 00:31:33.712 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:33.712 [global] 00:31:33.712 thread=1 00:31:33.712 invalidate=1 00:31:33.712 rw=write 00:31:33.712 time_based=1 00:31:33.712 runtime=1 00:31:33.712 ioengine=libaio 00:31:33.712 direct=1 00:31:33.712 bs=4096 00:31:33.712 iodepth=128 00:31:33.712 norandommap=0 00:31:33.712 numjobs=1 00:31:33.712 00:31:33.712 verify_dump=1 00:31:33.712 verify_backlog=512 00:31:33.712 verify_state_save=0 00:31:33.712 do_verify=1 00:31:33.712 verify=crc32c-intel 00:31:33.712 [job0] 00:31:33.712 filename=/dev/nvme0n1 00:31:33.712 [job1] 00:31:33.712 filename=/dev/nvme0n2 00:31:33.712 [job2] 00:31:33.712 filename=/dev/nvme0n3 00:31:33.712 [job3] 00:31:33.712 filename=/dev/nvme0n4 00:31:33.712 Could not set queue depth (nvme0n1) 00:31:33.712 Could not set queue depth (nvme0n2) 00:31:33.712 Could not set queue depth (nvme0n3) 00:31:33.712 Could not set queue depth (nvme0n4) 00:31:33.970 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.970 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.970 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.970 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.970 fio-3.35 00:31:33.970 Starting 4 threads 00:31:35.347 00:31:35.347 job0: (groupid=0, jobs=1): err= 0: pid=3715753: Wed Nov 20 10:48:35 2024 00:31:35.347 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.7MiB/1046msec) 00:31:35.347 slat (usec): min=2, max=13819, avg=180.88, stdev=1065.83 
00:31:35.347 clat (usec): min=10215, max=71813, avg=24347.55, stdev=10994.31 00:31:35.347 lat (usec): min=10220, max=79934, avg=24528.43, stdev=11064.88 00:31:35.347 clat percentiles (usec): 00:31:35.347 | 1.00th=[11076], 5.00th=[11994], 10.00th=[13435], 20.00th=[15533], 00:31:35.347 | 30.00th=[17957], 40.00th=[20841], 50.00th=[23725], 60.00th=[25035], 00:31:35.347 | 70.00th=[26870], 80.00th=[28967], 90.00th=[34341], 95.00th=[41681], 00:31:35.347 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:31:35.347 | 99.99th=[71828] 00:31:35.347 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1046msec); 0 zone resets 00:31:35.347 slat (usec): min=2, max=17224, avg=156.38, stdev=945.72 00:31:35.347 clat (usec): min=529, max=41221, avg=20781.40, stdev=6457.77 00:31:35.347 lat (usec): min=551, max=41229, avg=20937.78, stdev=6534.64 00:31:35.347 clat percentiles (usec): 00:31:35.347 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[12256], 20.00th=[15926], 00:31:35.347 | 30.00th=[16909], 40.00th=[19268], 50.00th=[20841], 60.00th=[22676], 00:31:35.347 | 70.00th=[24249], 80.00th=[25560], 90.00th=[28181], 95.00th=[31589], 00:31:35.347 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:31:35.347 | 99.99th=[41157] 00:31:35.347 bw ( KiB/s): min=12288, max=12288, per=19.56%, avg=12288.00, stdev= 0.00, samples=2 00:31:35.347 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:31:35.347 lat (usec) : 750=0.03%, 1000=0.09% 00:31:35.347 lat (msec) : 10=3.20%, 20=40.13%, 50=54.37%, 100=2.17% 00:31:35.347 cpu : usr=2.68%, sys=4.11%, ctx=222, majf=0, minf=1 00:31:35.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:35.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.347 issued rwts: total=2736,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.347 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:35.347 job1: (groupid=0, jobs=1): err= 0: pid=3715754: Wed Nov 20 10:48:35 2024 00:31:35.347 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:31:35.347 slat (nsec): min=1578, max=9686.6k, avg=132916.93, stdev=862264.91 00:31:35.347 clat (usec): min=9503, max=41891, avg=17710.76, stdev=4037.93 00:31:35.347 lat (usec): min=9510, max=41897, avg=17843.67, stdev=4101.43 00:31:35.347 clat percentiles (usec): 00:31:35.347 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[13173], 20.00th=[14746], 00:31:35.347 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[17957], 00:31:35.347 | 70.00th=[19268], 80.00th=[20579], 90.00th=[21890], 95.00th=[23987], 00:31:35.347 | 99.00th=[29230], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:31:35.347 | 99.99th=[41681] 00:31:35.347 write: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1005msec); 0 zone resets 00:31:35.347 slat (usec): min=2, max=16525, avg=173.01, stdev=922.50 00:31:35.347 clat (usec): min=851, max=60695, avg=21425.96, stdev=10952.99 00:31:35.347 lat (usec): min=7994, max=60715, avg=21598.97, stdev=11036.96 00:31:35.347 clat percentiles (usec): 00:31:35.347 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[13435], 20.00th=[14615], 00:31:35.347 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16909], 60.00th=[17433], 00:31:35.347 | 70.00th=[20841], 80.00th=[27132], 90.00th=[42730], 95.00th=[45876], 00:31:35.347 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:31:35.347 | 99.99th=[60556] 00:31:35.347 bw ( KiB/s): min=12480, max=12920, per=20.22%, avg=12700.00, stdev=311.13, samples=2 00:31:35.347 iops : min= 3120, max= 3230, avg=3175.00, stdev=77.78, samples=2 00:31:35.347 lat (usec) : 1000=0.02% 00:31:35.347 lat (msec) : 10=2.27%, 20=70.68%, 50=25.57%, 100=1.46% 00:31:35.347 cpu : usr=3.78%, sys=4.28%, ctx=255, majf=0, minf=1 00:31:35.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:35.347 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.347 issued rwts: total=3072,3303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.347 job2: (groupid=0, jobs=1): err= 0: pid=3715756: Wed Nov 20 10:48:35 2024 00:31:35.347 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:31:35.347 slat (nsec): min=1163, max=14261k, avg=123627.98, stdev=936326.67 00:31:35.347 clat (usec): min=6117, max=51018, avg=16466.49, stdev=9992.82 00:31:35.347 lat (usec): min=6123, max=54992, avg=16590.12, stdev=10063.96 00:31:35.347 clat percentiles (usec): 00:31:35.347 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10683], 00:31:35.347 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[13042], 00:31:35.347 | 70.00th=[16450], 80.00th=[18744], 90.00th=[34341], 95.00th=[41681], 00:31:35.347 | 99.00th=[46924], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:31:35.347 | 99.99th=[51119] 00:31:35.347 write: IOPS=3874, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1008msec); 0 zone resets 00:31:35.347 slat (nsec): min=1812, max=11930k, avg=128097.95, stdev=858691.06 00:31:35.348 clat (usec): min=983, max=66727, avg=17585.26, stdev=12281.41 00:31:35.348 lat (usec): min=991, max=66731, avg=17713.36, stdev=12349.41 00:31:35.348 clat percentiles (usec): 00:31:35.348 | 1.00th=[ 5211], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 8717], 00:31:35.348 | 30.00th=[10683], 40.00th=[11338], 50.00th=[13698], 60.00th=[15664], 00:31:35.348 | 70.00th=[19268], 80.00th=[23462], 90.00th=[32375], 95.00th=[47449], 00:31:35.348 | 99.00th=[61604], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:31:35.348 | 99.99th=[66847] 00:31:35.348 bw ( KiB/s): min=12288, max=17928, per=24.05%, avg=15108.00, stdev=3988.08, samples=2 00:31:35.348 iops : min= 3072, max= 4482, avg=3777.00, stdev=997.02, samples=2 00:31:35.348 lat (usec) : 1000=0.07% 
00:31:35.348 lat (msec) : 2=0.03%, 4=0.21%, 10=18.91%, 20=58.53%, 50=20.15% 00:31:35.348 lat (msec) : 100=2.11% 00:31:35.348 cpu : usr=2.58%, sys=4.27%, ctx=240, majf=0, minf=1 00:31:35.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:35.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.348 issued rwts: total=3584,3905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.348 job3: (groupid=0, jobs=1): err= 0: pid=3715757: Wed Nov 20 10:48:35 2024 00:31:35.348 read: IOPS=5690, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1009msec) 00:31:35.348 slat (nsec): min=1424, max=16261k, avg=81993.99, stdev=676417.54 00:31:35.348 clat (usec): min=3614, max=25557, avg=10703.59, stdev=3127.78 00:31:35.348 lat (usec): min=3621, max=25567, avg=10785.59, stdev=3182.92 00:31:35.348 clat percentiles (usec): 00:31:35.348 | 1.00th=[ 6915], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8291], 00:31:35.348 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10683], 00:31:35.348 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14222], 95.00th=[17433], 00:31:35.348 | 99.00th=[22676], 99.50th=[23200], 99.90th=[25035], 99.95th=[25035], 00:31:35.348 | 99.99th=[25560] 00:31:35.348 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:31:35.348 slat (usec): min=2, max=16640, avg=79.85, stdev=579.29 00:31:35.348 clat (usec): min=1931, max=28038, avg=10224.13, stdev=3754.72 00:31:35.348 lat (usec): min=2432, max=37727, avg=10303.98, stdev=3795.39 00:31:35.348 clat percentiles (usec): 00:31:35.348 | 1.00th=[ 4015], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 7898], 00:31:35.348 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9896], 00:31:35.348 | 70.00th=[11207], 80.00th=[11863], 90.00th=[15139], 95.00th=[17171], 00:31:35.348 | 99.00th=[25035], 
99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:31:35.348 | 99.99th=[27919] 00:31:35.348 bw ( KiB/s): min=24456, max=24560, per=39.02%, avg=24508.00, stdev=73.54, samples=2 00:31:35.348 iops : min= 6114, max= 6140, avg=6127.00, stdev=18.38, samples=2 00:31:35.348 lat (msec) : 2=0.01%, 4=0.57%, 10=54.56%, 20=42.61%, 50=2.25% 00:31:35.348 cpu : usr=4.07%, sys=8.53%, ctx=430, majf=0, minf=1 00:31:35.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:35.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.348 issued rwts: total=5742,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.348 00:31:35.348 Run status group 0 (all jobs): 00:31:35.348 READ: bw=56.5MiB/s (59.3MB/s), 10.2MiB/s-22.2MiB/s (10.7MB/s-23.3MB/s), io=59.1MiB (62.0MB), run=1005-1046msec 00:31:35.348 WRITE: bw=61.3MiB/s (64.3MB/s), 11.5MiB/s-23.8MiB/s (12.0MB/s-24.9MB/s), io=64.2MiB (67.3MB), run=1005-1046msec 00:31:35.348 00:31:35.348 Disk stats (read/write): 00:31:35.348 nvme0n1: ios=2266/2560, merge=0/0, ticks=16826/18195, in_queue=35021, util=87.66% 00:31:35.348 nvme0n2: ios=2373/2560, merge=0/0, ticks=20197/28811, in_queue=49008, util=91.57% 00:31:35.348 nvme0n3: ios=2617/2841, merge=0/0, ticks=24637/25194, in_queue=49831, util=89.82% 00:31:35.348 nvme0n4: ios=4668/4927, merge=0/0, ticks=46440/45248, in_queue=91688, util=100.00% 00:31:35.348 10:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:35.348 [global] 00:31:35.348 thread=1 00:31:35.348 invalidate=1 00:31:35.348 rw=randwrite 00:31:35.348 time_based=1 00:31:35.348 runtime=1 00:31:35.348 ioengine=libaio 00:31:35.348 direct=1 00:31:35.348 bs=4096 00:31:35.348 
iodepth=128 00:31:35.348 norandommap=0 00:31:35.348 numjobs=1 00:31:35.348 00:31:35.348 verify_dump=1 00:31:35.348 verify_backlog=512 00:31:35.348 verify_state_save=0 00:31:35.348 do_verify=1 00:31:35.348 verify=crc32c-intel 00:31:35.348 [job0] 00:31:35.348 filename=/dev/nvme0n1 00:31:35.348 [job1] 00:31:35.348 filename=/dev/nvme0n2 00:31:35.348 [job2] 00:31:35.348 filename=/dev/nvme0n3 00:31:35.348 [job3] 00:31:35.348 filename=/dev/nvme0n4 00:31:35.348 Could not set queue depth (nvme0n1) 00:31:35.348 Could not set queue depth (nvme0n2) 00:31:35.348 Could not set queue depth (nvme0n3) 00:31:35.348 Could not set queue depth (nvme0n4) 00:31:35.607 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.608 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.608 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.608 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.608 fio-3.35 00:31:35.608 Starting 4 threads 00:31:36.986 00:31:36.986 job0: (groupid=0, jobs=1): err= 0: pid=3716130: Wed Nov 20 10:48:37 2024 00:31:36.986 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:31:36.986 slat (nsec): min=1312, max=11703k, avg=85147.52, stdev=670833.31 00:31:36.986 clat (usec): min=2880, max=29195, avg=10935.60, stdev=3624.62 00:31:36.986 lat (usec): min=4785, max=29201, avg=11020.75, stdev=3670.78 00:31:36.986 clat percentiles (usec): 00:31:36.986 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7635], 00:31:36.986 | 30.00th=[ 8586], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11338], 00:31:36.986 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14877], 95.00th=[17171], 00:31:36.986 | 99.00th=[25822], 99.50th=[27657], 99.90th=[28705], 99.95th=[29230], 00:31:36.986 | 99.99th=[29230] 00:31:36.986 
write: IOPS=5855, BW=22.9MiB/s (24.0MB/s)(23.1MiB/1008msec); 0 zone resets 00:31:36.986 slat (usec): min=2, max=11780, avg=78.10, stdev=512.56 00:31:36.986 clat (usec): min=524, max=33870, avg=11164.23, stdev=6034.65 00:31:36.986 lat (usec): min=534, max=33878, avg=11242.33, stdev=6078.68 00:31:36.986 clat percentiles (usec): 00:31:36.986 | 1.00th=[ 3425], 5.00th=[ 5538], 10.00th=[ 6849], 20.00th=[ 7701], 00:31:36.986 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[10159], 00:31:36.986 | 70.00th=[11731], 80.00th=[14091], 90.00th=[17695], 95.00th=[26346], 00:31:36.986 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:31:36.986 | 99.99th=[33817] 00:31:36.986 bw ( KiB/s): min=20480, max=25720, per=35.96%, avg=23100.00, stdev=3705.24, samples=2 00:31:36.986 iops : min= 5120, max= 6430, avg=5775.00, stdev=926.31, samples=2 00:31:36.986 lat (usec) : 750=0.03% 00:31:36.986 lat (msec) : 2=0.05%, 4=0.95%, 10=48.83%, 20=45.04%, 50=5.11% 00:31:36.986 cpu : usr=5.16%, sys=5.96%, ctx=548, majf=0, minf=1 00:31:36.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:36.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.986 issued rwts: total=5632,5902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.986 job1: (groupid=0, jobs=1): err= 0: pid=3716131: Wed Nov 20 10:48:37 2024 00:31:36.986 read: IOPS=4160, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1011msec) 00:31:36.986 slat (nsec): min=1092, max=12371k, avg=117548.35, stdev=825099.05 00:31:36.986 clat (usec): min=1353, max=66914, avg=13901.38, stdev=6975.50 00:31:36.986 lat (usec): min=1365, max=66926, avg=14018.93, stdev=7044.02 00:31:36.986 clat percentiles (usec): 00:31:36.986 | 1.00th=[ 3425], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 8979], 00:31:36.986 | 30.00th=[10028], 
40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:31:36.986 | 70.00th=[15008], 80.00th=[17433], 90.00th=[21627], 95.00th=[24511], 00:31:36.986 | 99.00th=[41681], 99.50th=[51643], 99.90th=[66847], 99.95th=[66847], 00:31:36.986 | 99.99th=[66847] 00:31:36.986 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:31:36.986 slat (nsec): min=1964, max=10332k, avg=101899.10, stdev=624526.05 00:31:36.986 clat (usec): min=769, max=76393, avg=15108.80, stdev=12490.51 00:31:36.986 lat (usec): min=777, max=76406, avg=15210.70, stdev=12567.74 00:31:36.986 clat percentiles (usec): 00:31:36.986 | 1.00th=[ 2474], 5.00th=[ 4359], 10.00th=[ 6521], 20.00th=[ 8029], 00:31:36.986 | 30.00th=[ 8717], 40.00th=[10159], 50.00th=[11076], 60.00th=[12649], 00:31:36.986 | 70.00th=[14746], 80.00th=[17957], 90.00th=[32113], 95.00th=[49021], 00:31:36.986 | 99.00th=[65799], 99.50th=[67634], 99.90th=[76022], 99.95th=[76022], 00:31:36.986 | 99.99th=[76022] 00:31:36.986 bw ( KiB/s): min=14920, max=21800, per=28.58%, avg=18360.00, stdev=4864.89, samples=2 00:31:36.987 iops : min= 3730, max= 5450, avg=4590.00, stdev=1216.22, samples=2 00:31:36.987 lat (usec) : 1000=0.07% 00:31:36.987 lat (msec) : 2=0.61%, 4=1.87%, 10=31.59%, 20=52.98%, 50=10.23% 00:31:36.987 lat (msec) : 100=2.64% 00:31:36.987 cpu : usr=3.76%, sys=4.95%, ctx=393, majf=0, minf=1 00:31:36.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:36.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.987 issued rwts: total=4206,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.987 job2: (groupid=0, jobs=1): err= 0: pid=3716132: Wed Nov 20 10:48:37 2024 00:31:36.987 read: IOPS=2153, BW=8614KiB/s (8821kB/s)(8700KiB/1010msec) 00:31:36.987 slat (nsec): min=1179, max=22410k, avg=225826.34, 
stdev=1527606.13 00:31:36.987 clat (usec): min=7126, max=56551, avg=28991.35, stdev=8656.00 00:31:36.987 lat (usec): min=8770, max=56576, avg=29217.18, stdev=8713.31 00:31:36.987 clat percentiles (usec): 00:31:36.987 | 1.00th=[11076], 5.00th=[13042], 10.00th=[14877], 20.00th=[22152], 00:31:36.987 | 30.00th=[25035], 40.00th=[27657], 50.00th=[29492], 60.00th=[32375], 00:31:36.987 | 70.00th=[33817], 80.00th=[36439], 90.00th=[39060], 95.00th=[41157], 00:31:36.987 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[50594], 00:31:36.987 | 99.99th=[56361] 00:31:36.987 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:31:36.987 slat (usec): min=2, max=25813, avg=193.28, stdev=1197.85 00:31:36.987 clat (usec): min=7085, max=54754, avg=25135.55, stdev=10481.17 00:31:36.987 lat (usec): min=7096, max=54813, avg=25328.83, stdev=10570.67 00:31:36.987 clat percentiles (usec): 00:31:36.987 | 1.00th=[10028], 5.00th=[11994], 10.00th=[15926], 20.00th=[17695], 00:31:36.987 | 30.00th=[18744], 40.00th=[19792], 50.00th=[21890], 60.00th=[23725], 00:31:36.987 | 70.00th=[27919], 80.00th=[31589], 90.00th=[43779], 95.00th=[48497], 00:31:36.987 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:31:36.987 | 99.99th=[54789] 00:31:36.987 bw ( KiB/s): min= 9080, max=11392, per=15.93%, avg=10236.00, stdev=1634.83, samples=2 00:31:36.987 iops : min= 2270, max= 2848, avg=2559.00, stdev=408.71, samples=2 00:31:36.987 lat (msec) : 10=0.68%, 20=29.50%, 50=67.50%, 100=2.32% 00:31:36.987 cpu : usr=1.09%, sys=3.17%, ctx=233, majf=0, minf=1 00:31:36.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:36.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.987 issued rwts: total=2175,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.987 latency : target=0, window=0, percentile=100.00%, depth=128 
00:31:36.987 job3: (groupid=0, jobs=1): err= 0: pid=3716133: Wed Nov 20 10:48:37 2024 00:31:36.987 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:31:36.987 slat (nsec): min=1257, max=21151k, avg=156115.25, stdev=1017817.21 00:31:36.987 clat (usec): min=4248, max=65233, avg=20448.39, stdev=10340.01 00:31:36.987 lat (usec): min=4254, max=65298, avg=20604.51, stdev=10418.91 00:31:36.987 clat percentiles (usec): 00:31:36.987 | 1.00th=[ 6390], 5.00th=[ 9110], 10.00th=[11994], 20.00th=[12256], 00:31:36.987 | 30.00th=[13042], 40.00th=[14746], 50.00th=[17433], 60.00th=[19530], 00:31:36.987 | 70.00th=[23725], 80.00th=[28967], 90.00th=[35390], 95.00th=[43254], 00:31:36.987 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53216], 99.95th=[53740], 00:31:36.987 | 99.99th=[65274] 00:31:36.987 write: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1012msec); 0 zone resets 00:31:36.987 slat (nsec): min=2000, max=17544k, avg=156960.87, stdev=1032623.35 00:31:36.987 clat (usec): min=6854, max=54817, avg=20354.63, stdev=10201.52 00:31:36.987 lat (usec): min=6862, max=54825, avg=20511.59, stdev=10282.71 00:31:36.987 clat percentiles (usec): 00:31:36.987 | 1.00th=[ 7373], 5.00th=[10945], 10.00th=[11994], 20.00th=[12256], 00:31:36.987 | 30.00th=[12911], 40.00th=[13698], 50.00th=[16909], 60.00th=[19268], 00:31:36.987 | 70.00th=[23462], 80.00th=[26870], 90.00th=[34341], 95.00th=[43254], 00:31:36.987 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:31:36.987 | 99.99th=[54789] 00:31:36.987 bw ( KiB/s): min=10928, max=13648, per=19.13%, avg=12288.00, stdev=1923.33, samples=2 00:31:36.987 iops : min= 2732, max= 3412, avg=3072.00, stdev=480.83, samples=2 00:31:36.987 lat (msec) : 10=4.68%, 20=57.46%, 50=36.24%, 100=1.61% 00:31:36.987 cpu : usr=2.37%, sys=3.56%, ctx=241, majf=0, minf=1 00:31:36.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:36.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:36.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.987 issued rwts: total=3072,3183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.987 00:31:36.987 Run status group 0 (all jobs): 00:31:36.987 READ: bw=58.2MiB/s (61.1MB/s), 8614KiB/s-21.8MiB/s (8821kB/s-22.9MB/s), io=58.9MiB (61.8MB), run=1008-1012msec 00:31:36.987 WRITE: bw=62.7MiB/s (65.8MB/s), 9.90MiB/s-22.9MiB/s (10.4MB/s-24.0MB/s), io=63.5MiB (66.6MB), run=1008-1012msec 00:31:36.987 00:31:36.987 Disk stats (read/write): 00:31:36.987 nvme0n1: ios=4264/4608, merge=0/0, ticks=48107/51557, in_queue=99664, util=93.39% 00:31:36.987 nvme0n2: ios=3623/3991, merge=0/0, ticks=46065/48440, in_queue=94505, util=97.64% 00:31:36.987 nvme0n3: ios=1580/1975, merge=0/0, ticks=24642/26179, in_queue=50821, util=96.42% 00:31:36.987 nvme0n4: ios=2589/2583, merge=0/0, ticks=21893/19378, in_queue=41271, util=98.79% 00:31:36.987 10:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:36.987 10:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3716360 00:31:36.987 10:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:36.987 10:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:36.987 [global] 00:31:36.987 thread=1 00:31:36.987 invalidate=1 00:31:36.987 rw=read 00:31:36.987 time_based=1 00:31:36.987 runtime=10 00:31:36.987 ioengine=libaio 00:31:36.987 direct=1 00:31:36.987 bs=4096 00:31:36.987 iodepth=1 00:31:36.987 norandommap=1 00:31:36.987 numjobs=1 00:31:36.987 00:31:36.987 [job0] 00:31:36.987 filename=/dev/nvme0n1 00:31:36.987 [job1] 00:31:36.987 filename=/dev/nvme0n2 00:31:36.987 [job2] 00:31:36.987 filename=/dev/nvme0n3 00:31:36.987 
[job3] 00:31:36.987 filename=/dev/nvme0n4 00:31:36.987 Could not set queue depth (nvme0n1) 00:31:36.987 Could not set queue depth (nvme0n2) 00:31:36.987 Could not set queue depth (nvme0n3) 00:31:36.987 Could not set queue depth (nvme0n4) 00:31:37.246 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.246 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.246 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.246 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.246 fio-3.35 00:31:37.246 Starting 4 threads 00:31:40.530 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:40.530 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:40.530 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:31:40.530 fio: pid=3716507, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:40.530 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=23973888, buflen=4096 00:31:40.530 fio: pid=3716506, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:40.530 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:40.530 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:40.530 10:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:40.530 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:40.530 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:31:40.530 fio: pid=3716500, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:40.790 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:40.790 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:40.790 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=335872, buflen=4096 00:31:40.790 fio: pid=3716503, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:40.790 00:31:40.790 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3716500: Wed Nov 20 10:48:41 2024 00:31:40.790 read: IOPS=24, BW=97.7KiB/s (100kB/s)(308KiB/3151msec) 00:31:40.790 slat (usec): min=10, max=8664, avg=133.75, stdev=978.49 00:31:40.790 clat (usec): min=418, max=42066, avg=40494.71, stdev=4634.29 00:31:40.790 lat (usec): min=445, max=49875, avg=40629.15, stdev=4754.42 00:31:40.790 clat percentiles (usec): 00:31:40.790 | 1.00th=[ 420], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:40.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:40.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:40.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:40.790 | 99.99th=[42206] 
00:31:40.790 bw ( KiB/s): min= 93, max= 104, per=1.36%, avg=98.17, stdev= 4.67, samples=6 00:31:40.790 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:31:40.790 lat (usec) : 500=1.28% 00:31:40.790 lat (msec) : 50=97.44% 00:31:40.790 cpu : usr=0.10%, sys=0.00%, ctx=81, majf=0, minf=1 00:31:40.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.790 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3716503: Wed Nov 20 10:48:41 2024 00:31:40.790 read: IOPS=24, BW=97.2KiB/s (99.6kB/s)(328KiB/3373msec) 00:31:40.790 slat (usec): min=13, max=14807, avg=344.86, stdev=2070.43 00:31:40.790 clat (usec): min=426, max=43804, avg=40518.44, stdev=4493.52 00:31:40.790 lat (usec): min=462, max=55931, avg=40867.21, stdev=4990.51 00:31:40.790 clat percentiles (usec): 00:31:40.790 | 1.00th=[ 429], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:40.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:40.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:40.790 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:31:40.790 | 99.99th=[43779] 00:31:40.790 bw ( KiB/s): min= 96, max= 104, per=1.36%, avg=98.00, stdev= 3.35, samples=6 00:31:40.790 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:31:40.790 lat (usec) : 500=1.20% 00:31:40.790 lat (msec) : 50=97.59% 00:31:40.790 cpu : usr=0.00%, sys=0.12%, ctx=86, majf=0, minf=1 00:31:40.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:40.790 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.790 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3716506: Wed Nov 20 10:48:41 2024 00:31:40.790 read: IOPS=1997, BW=7990KiB/s (8182kB/s)(22.9MiB/2930msec) 00:31:40.790 slat (nsec): min=7306, max=70500, avg=8651.59, stdev=2229.51 00:31:40.790 clat (usec): min=186, max=44983, avg=486.29, stdev=3105.38 00:31:40.790 lat (usec): min=194, max=45007, avg=494.94, stdev=3106.61 00:31:40.790 clat percentiles (usec): 00:31:40.790 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:31:40.790 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:31:40.790 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 260], 00:31:40.790 | 99.00th=[ 388], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:40.790 | 99.99th=[44827] 00:31:40.790 bw ( KiB/s): min= 96, max=15520, per=100.00%, avg=9348.80, stdev=8345.23, samples=5 00:31:40.790 iops : min= 24, max= 3880, avg=2337.20, stdev=2086.31, samples=5 00:31:40.790 lat (usec) : 250=68.33%, 500=30.95%, 750=0.10% 00:31:40.790 lat (msec) : 2=0.02%, 50=0.58% 00:31:40.790 cpu : usr=0.92%, sys=3.48%, ctx=5857, majf=0, minf=2 00:31:40.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 issued rwts: total=5854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.790 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3716507: Wed Nov 20 10:48:41 2024 00:31:40.790 read: IOPS=24, 
BW=98.2KiB/s (101kB/s)(268KiB/2729msec) 00:31:40.790 slat (nsec): min=11824, max=72072, avg=21124.91, stdev=10928.91 00:31:40.790 clat (usec): min=491, max=41918, avg=40391.58, stdev=4950.38 00:31:40.790 lat (usec): min=525, max=41943, avg=40412.70, stdev=4948.77 00:31:40.790 clat percentiles (usec): 00:31:40.790 | 1.00th=[ 490], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:40.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:40.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:40.790 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:40.790 | 99.99th=[41681] 00:31:40.790 bw ( KiB/s): min= 96, max= 104, per=1.37%, avg=99.20, stdev= 4.38, samples=5 00:31:40.790 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:31:40.790 lat (usec) : 500=1.47% 00:31:40.790 lat (msec) : 50=97.06% 00:31:40.790 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:31:40.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.790 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.790 00:31:40.790 Run status group 0 (all jobs): 00:31:40.790 READ: bw=7209KiB/s (7382kB/s), 97.2KiB/s-7990KiB/s (99.6kB/s-8182kB/s), io=23.7MiB (24.9MB), run=2729-3373msec 00:31:40.790 00:31:40.790 Disk stats (read/write): 00:31:40.790 nvme0n1: ios=108/0, merge=0/0, ticks=3541/0, in_queue=3541, util=99.85% 00:31:40.790 nvme0n2: ios=118/0, merge=0/0, ticks=4232/0, in_queue=4232, util=98.39% 00:31:40.790 nvme0n3: ios=5886/0, merge=0/0, ticks=3037/0, in_queue=3037, util=99.39% 00:31:40.790 nvme0n4: ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.48% 00:31:41.050 10:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.050 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:41.308 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.308 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:41.308 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.308 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:41.567 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.567 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3716360 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:41.826 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:41.826 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:41.827 nvmf hotplug test: fio failed as expected 00:31:41.827 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.086 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.086 rmmod nvme_tcp 00:31:42.086 rmmod nvme_fabrics 00:31:42.086 rmmod nvme_keyring 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3713886 ']' 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3713886 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3713886 ']' 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3713886 00:31:42.346 10:48:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713886 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713886' 00:31:42.346 killing process with pid 3713886 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3713886 00:31:42.346 10:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3713886 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:42.346 
10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.346 10:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.883 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:44.883 00:31:44.883 real 0m25.864s 00:31:44.883 user 1m30.811s 00:31:44.883 sys 0m10.598s 00:31:44.883 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.883 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.883 ************************************ 00:31:44.883 END TEST nvmf_fio_target 00:31:44.883 ************************************ 00:31:44.883 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:44.883 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:44.884 ************************************ 00:31:44.884 START TEST nvmf_bdevio 00:31:44.884 
************************************ 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:44.884 * Looking for test storage... 00:31:44.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.884 --rc genhtml_branch_coverage=1 00:31:44.884 --rc genhtml_function_coverage=1 00:31:44.884 --rc genhtml_legend=1 00:31:44.884 --rc geninfo_all_blocks=1 00:31:44.884 --rc geninfo_unexecuted_blocks=1 00:31:44.884 00:31:44.884 ' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.884 --rc genhtml_branch_coverage=1 00:31:44.884 --rc genhtml_function_coverage=1 00:31:44.884 --rc genhtml_legend=1 00:31:44.884 --rc geninfo_all_blocks=1 00:31:44.884 --rc geninfo_unexecuted_blocks=1 00:31:44.884 00:31:44.884 ' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.884 --rc genhtml_branch_coverage=1 00:31:44.884 --rc genhtml_function_coverage=1 00:31:44.884 --rc genhtml_legend=1 00:31:44.884 --rc geninfo_all_blocks=1 00:31:44.884 --rc geninfo_unexecuted_blocks=1 00:31:44.884 00:31:44.884 ' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:44.884 --rc genhtml_branch_coverage=1 00:31:44.884 --rc genhtml_function_coverage=1 00:31:44.884 --rc genhtml_legend=1 00:31:44.884 --rc geninfo_all_blocks=1 00:31:44.884 --rc geninfo_unexecuted_blocks=1 00:31:44.884 00:31:44.884 ' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:44.884 10:48:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 10:48:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:44.885 10:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:51.453 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.454 10:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.454 10:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:51.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:51.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:51.454 Found net devices under 0000:86:00.0: cvl_0_0 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:51.454 Found net devices under 0000:86:00.1: cvl_0_1 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.454 
10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.454 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.454 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.454 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.454 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:31:51.455 00:31:51.455 --- 10.0.0.2 ping statistics --- 00:31:51.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.455 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:31:51.455 00:31:51.455 --- 10.0.0.1 ping statistics --- 00:31:51.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.455 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3720738 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3720738 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3720738 ']' 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.455 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.455 [2024-11-20 10:48:51.334217] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:51.455 [2024-11-20 10:48:51.335250] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:31:51.455 [2024-11-20 10:48:51.335307] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.455 [2024-11-20 10:48:51.417851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:51.455 [2024-11-20 10:48:51.461877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.455 [2024-11-20 10:48:51.461915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.455 [2024-11-20 10:48:51.461922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.455 [2024-11-20 10:48:51.461928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.455 [2024-11-20 10:48:51.461934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.455 [2024-11-20 10:48:51.463573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:51.455 [2024-11-20 10:48:51.463682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:51.455 [2024-11-20 10:48:51.463786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.455 [2024-11-20 10:48:51.463787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:51.455 [2024-11-20 10:48:51.531616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:51.455 [2024-11-20 10:48:51.531987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:51.455 [2024-11-20 10:48:51.532550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:51.455 [2024-11-20 10:48:51.532931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:51.455 [2024-11-20 10:48:51.532972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:51.455 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.455 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:51.455 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:51.455 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:51.455 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 [2024-11-20 10:48:52.216475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 Malloc0 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:51.714 [2024-11-20 10:48:52.296671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:51.714 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:51.715 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:51.715 { 00:31:51.715 "params": { 00:31:51.715 "name": "Nvme$subsystem", 00:31:51.715 "trtype": "$TEST_TRANSPORT", 00:31:51.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:51.715 "adrfam": "ipv4", 00:31:51.715 "trsvcid": "$NVMF_PORT", 00:31:51.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:51.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:51.715 "hdgst": ${hdgst:-false}, 00:31:51.715 "ddgst": ${ddgst:-false} 00:31:51.715 }, 00:31:51.715 "method": "bdev_nvme_attach_controller" 00:31:51.715 } 00:31:51.715 EOF 00:31:51.715 )") 00:31:51.715 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:51.715 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:51.715 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:51.715 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:51.715 "params": { 00:31:51.715 "name": "Nvme1", 00:31:51.715 "trtype": "tcp", 00:31:51.715 "traddr": "10.0.0.2", 00:31:51.715 "adrfam": "ipv4", 00:31:51.715 "trsvcid": "4420", 00:31:51.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:51.715 "hdgst": false, 00:31:51.715 "ddgst": false 00:31:51.715 }, 00:31:51.715 "method": "bdev_nvme_attach_controller" 00:31:51.715 }' 00:31:51.715 [2024-11-20 10:48:52.350316] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:31:51.715 [2024-11-20 10:48:52.350361] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720988 ] 00:31:51.715 [2024-11-20 10:48:52.427369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:51.973 [2024-11-20 10:48:52.471743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.973 [2024-11-20 10:48:52.471783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.973 [2024-11-20 10:48:52.471784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.231 I/O targets: 00:31:52.231 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:52.231 00:31:52.231 00:31:52.231 CUnit - A unit testing framework for C - Version 2.1-3 00:31:52.231 http://cunit.sourceforge.net/ 00:31:52.231 00:31:52.231 00:31:52.231 Suite: bdevio tests on: Nvme1n1 00:31:52.231 Test: blockdev write read block ...passed 00:31:52.231 Test: blockdev write zeroes read block ...passed 00:31:52.231 Test: blockdev write zeroes read no split ...passed 00:31:52.231 Test: blockdev 
write zeroes read split ...passed 00:31:52.231 Test: blockdev write zeroes read split partial ...passed 00:31:52.231 Test: blockdev reset ...[2024-11-20 10:48:52.930628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:52.231 [2024-11-20 10:48:52.930695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b36340 (9): Bad file descriptor 00:31:52.551 [2024-11-20 10:48:52.974820] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:52.551 passed 00:31:52.551 Test: blockdev write read 8 blocks ...passed 00:31:52.551 Test: blockdev write read size > 128k ...passed 00:31:52.551 Test: blockdev write read invalid size ...passed 00:31:52.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:52.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:52.551 Test: blockdev write read max offset ...passed 00:31:52.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:52.551 Test: blockdev writev readv 8 blocks ...passed 00:31:52.551 Test: blockdev writev readv 30 x 1block ...passed 00:31:52.551 Test: blockdev writev readv block ...passed 00:31:52.864 Test: blockdev writev readv size > 128k ...passed 00:31:52.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:52.864 Test: blockdev comparev and writev ...[2024-11-20 10:48:53.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.265972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.265987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 
[2024-11-20 10:48:53.265995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.266968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:52.864 [2024-11-20 10:48:53.266976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:52.864 passed 00:31:52.864 Test: blockdev nvme passthru rw ...passed 00:31:52.864 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:48:53.350315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:52.864 [2024-11-20 10:48:53.350337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.350464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:52.864 [2024-11-20 10:48:53.350475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.350594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:52.864 [2024-11-20 10:48:53.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:52.864 [2024-11-20 10:48:53.350720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:52.864 [2024-11-20 10:48:53.350730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:52.864 passed 00:31:52.864 Test: blockdev nvme admin passthru ...passed 00:31:52.864 Test: blockdev copy ...passed 00:31:52.864 00:31:52.864 Run Summary: Type Total Ran Passed Failed Inactive 00:31:52.864 suites 1 1 n/a 0 0 00:31:52.864 tests 23 23 23 0 0 00:31:52.864 asserts 152 152 152 0 n/a 00:31:52.864 00:31:52.864 Elapsed time = 1.249 
seconds 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.864 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.864 rmmod nvme_tcp 00:31:53.151 rmmod nvme_fabrics 00:31:53.151 rmmod nvme_keyring 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3720738 ']' 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3720738 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3720738 ']' 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3720738 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3720738 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3720738' 00:31:53.151 killing process with pid 3720738 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3720738 00:31:53.151 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3720738 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.410 10:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.316 00:31:55.316 real 0m10.743s 00:31:55.316 user 0m10.082s 00:31:55.316 sys 0m5.256s 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.316 ************************************ 00:31:55.316 END TEST nvmf_bdevio 00:31:55.316 ************************************ 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:55.316 00:31:55.316 real 4m32.294s 00:31:55.316 user 9m4.758s 00:31:55.316 sys 1m50.327s 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.316 10:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:55.316 ************************************ 00:31:55.316 END TEST nvmf_target_core_interrupt_mode 00:31:55.316 ************************************ 00:31:55.316 10:48:56 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:55.316 10:48:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.316 10:48:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.316 10:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 ************************************ 00:31:55.576 START TEST nvmf_interrupt 00:31:55.576 ************************************ 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:55.576 * Looking for test storage... 
00:31:55.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:55.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.576 --rc genhtml_branch_coverage=1 00:31:55.576 --rc genhtml_function_coverage=1 00:31:55.576 --rc genhtml_legend=1 00:31:55.576 --rc geninfo_all_blocks=1 00:31:55.576 --rc geninfo_unexecuted_blocks=1 00:31:55.576 00:31:55.576 ' 00:31:55.576 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:55.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.576 --rc genhtml_branch_coverage=1 00:31:55.576 --rc 
genhtml_function_coverage=1 00:31:55.576 --rc genhtml_legend=1 00:31:55.576 --rc geninfo_all_blocks=1 00:31:55.576 --rc geninfo_unexecuted_blocks=1 00:31:55.576 00:31:55.577 ' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:55.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.577 --rc genhtml_branch_coverage=1 00:31:55.577 --rc genhtml_function_coverage=1 00:31:55.577 --rc genhtml_legend=1 00:31:55.577 --rc geninfo_all_blocks=1 00:31:55.577 --rc geninfo_unexecuted_blocks=1 00:31:55.577 00:31:55.577 ' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:55.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.577 --rc genhtml_branch_coverage=1 00:31:55.577 --rc genhtml_function_coverage=1 00:31:55.577 --rc genhtml_legend=1 00:31:55.577 --rc geninfo_all_blocks=1 00:31:55.577 --rc geninfo_unexecuted_blocks=1 00:31:55.577 00:31:55.577 ' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.577 
10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.577 
10:48:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.577 10:48:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.577 
10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.577 10:48:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.146 10:49:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.146 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:02.147 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:02.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.147 10:49:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:02.147 Found net devices under 0000:86:00.0: cvl_0_0 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:02.147 Found net devices under 0000:86:00.1: cvl_0_1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:02.147 10:49:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.147 10:49:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:02.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:32:02.147 00:32:02.147 --- 10.0.0.2 ping statistics --- 00:32:02.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.147 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:02.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:32:02.147 00:32:02.147 --- 10.0.0.1 ping statistics --- 00:32:02.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.147 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:02.147 10:49:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3724763 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3724763 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3724763 ']' 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.147 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.147 [2024-11-20 10:49:02.158434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:02.147 [2024-11-20 10:49:02.159329] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:32:02.147 [2024-11-20 10:49:02.159359] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.147 [2024-11-20 10:49:02.222703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:02.147 [2024-11-20 10:49:02.265370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.147 [2024-11-20 10:49:02.265406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.147 [2024-11-20 10:49:02.265413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.147 [2024-11-20 10:49:02.265419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.147 [2024-11-20 10:49:02.265425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.147 [2024-11-20 10:49:02.266579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.148 [2024-11-20 10:49:02.266584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.148 [2024-11-20 10:49:02.334645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:02.148 [2024-11-20 10:49:02.335145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:02.148 [2024-11-20 10:49:02.335175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:02.148 5000+0 records in 00:32:02.148 5000+0 records out 00:32:02.148 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188705 s, 543 MB/s 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 AIO0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.148 10:49:02 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 [2024-11-20 10:49:02.483372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 [2024-11-20 10:49:02.523661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3724763 0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 0 idle 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724763 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724763 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:02.148 
10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3724763 1 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 1 idle 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:02.148 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724768 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724768 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3724804 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3724763 0 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3724763 0 busy 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:02.407 10:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724763 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.24 reactor_0' 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724763 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.24 reactor_0 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:02.407 10:49:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724763 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.53 reactor_0' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724763 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.53 reactor_0 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3724763 1 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3724763 1 busy 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724768 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:01.32 reactor_1' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724768 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:01.32 reactor_1 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:03.778 10:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3724804 00:32:13.762 Initializing NVMe Controllers 00:32:13.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.762 
Controller IO queue size 256, less than required. 00:32:13.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:13.762 Initialization complete. Launching workers. 00:32:13.762 ======================================================== 00:32:13.762 Latency(us) 00:32:13.762 Device Information : IOPS MiB/s Average min max 00:32:13.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16557.40 64.68 15470.78 2875.68 55818.11 00:32:13.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16403.80 64.08 15611.37 7542.42 26785.02 00:32:13.762 ======================================================== 00:32:13.762 Total : 32961.20 128.75 15540.75 2875.68 55818.11 00:32:13.762 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3724763 0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 0 idle 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:13.762 10:49:13 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724763 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724763 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3724763 1 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 1 idle 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724768 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724768 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:13.762 10:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:13.762 10:49:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:13.762 10:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:13.762 10:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:13.762 10:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:13.762 10:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3724763 0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 0 idle 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:15.668 10:49:16 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724763 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.46 reactor_0' 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724763 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.46 reactor_0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3724763 1 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3724763 1 idle 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3724763 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3724763 -w 256 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3724768 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1' 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3724768 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1 00:32:15.668 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:15.926 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:15.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.927 rmmod nvme_tcp 00:32:15.927 rmmod nvme_fabrics 00:32:15.927 rmmod nvme_keyring 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3724763 ']' 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3724763 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3724763 ']' 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3724763 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.927 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724763 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724763' 00:32:16.184 killing process with pid 3724763 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3724763 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3724763 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.184 10:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.718 10:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.718 00:32:18.718 real 0m22.924s 00:32:18.718 user 0m39.353s 00:32:18.718 sys 0m8.882s 00:32:18.718 10:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.718 10:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.718 ************************************ 00:32:18.718 END TEST nvmf_interrupt 00:32:18.718 ************************************ 00:32:18.718 00:32:18.718 real 27m31.171s 00:32:18.718 user 56m54.728s 00:32:18.718 sys 9m21.306s 00:32:18.718 10:49:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.718 10:49:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.718 ************************************ 00:32:18.718 END TEST nvmf_tcp 00:32:18.718 ************************************ 00:32:18.718 10:49:19 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:18.718 10:49:19 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:18.718 10:49:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:18.718 10:49:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.718 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:32:18.718 ************************************ 00:32:18.718 START TEST spdkcli_nvmf_tcp 00:32:18.718 ************************************ 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:18.718 * Looking for test storage... 00:32:18.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.718 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.719 --rc genhtml_branch_coverage=1 00:32:18.719 --rc genhtml_function_coverage=1 00:32:18.719 --rc genhtml_legend=1 00:32:18.719 --rc geninfo_all_blocks=1 
00:32:18.719 --rc geninfo_unexecuted_blocks=1 00:32:18.719 00:32:18.719 ' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.719 --rc genhtml_branch_coverage=1 00:32:18.719 --rc genhtml_function_coverage=1 00:32:18.719 --rc genhtml_legend=1 00:32:18.719 --rc geninfo_all_blocks=1 00:32:18.719 --rc geninfo_unexecuted_blocks=1 00:32:18.719 00:32:18.719 ' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.719 --rc genhtml_branch_coverage=1 00:32:18.719 --rc genhtml_function_coverage=1 00:32:18.719 --rc genhtml_legend=1 00:32:18.719 --rc geninfo_all_blocks=1 00:32:18.719 --rc geninfo_unexecuted_blocks=1 00:32:18.719 00:32:18.719 ' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.719 --rc genhtml_branch_coverage=1 00:32:18.719 --rc genhtml_function_coverage=1 00:32:18.719 --rc genhtml_legend=1 00:32:18.719 --rc geninfo_all_blocks=1 00:32:18.719 --rc geninfo_unexecuted_blocks=1 00:32:18.719 00:32:18.719 ' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:18.719 10:49:19 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:18.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3727609 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3727609 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3727609 ']' 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-m 0x3 -p 0 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.719 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.720 [2024-11-20 10:49:19.352188] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:32:18.720 [2024-11-20 10:49:19.352237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727609 ] 00:32:18.720 [2024-11-20 10:49:19.426260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:18.978 [2024-11-20 10:49:19.472751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.978 [2024-11-20 10:49:19.472752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.978 10:49:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:18.978 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:18.978 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:18.978 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:18.978 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:18.978 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:18.978 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:18.978 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:18.978 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:18.978 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:18.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:18.978 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:18.978 ' 00:32:22.260 [2024-11-20 10:49:22.311017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.194 [2024-11-20 10:49:23.647535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:32:25.723 [2024-11-20 10:49:26.135281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:27.621 [2024-11-20 10:49:28.322054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:29.522 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:29.522 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:29.522 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:29.522 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:29.522 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:29.522 10:49:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.088 10:49:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:30.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:32:30.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:30.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:30.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:30.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:30.088 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:30.088 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:30.088 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:30.088 ' 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:36.644 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:36.644 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:36.644 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:36.644 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3727609 ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727609' 00:32:36.644 killing process with pid 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3727609 00:32:36.644 10:49:36 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3727609 ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3727609 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3727609 ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3727609 00:32:36.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3727609) - No such process 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3727609 is not found' 00:32:36.644 Process with pid 3727609 is not found 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:36.644 00:32:36.644 real 0m17.377s 00:32:36.644 user 0m38.304s 00:32:36.644 sys 0m0.841s 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.644 10:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.644 ************************************ 00:32:36.644 END TEST spdkcli_nvmf_tcp 00:32:36.644 ************************************ 00:32:36.644 10:49:36 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:36.644 10:49:36 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:32:36.644 10:49:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.644 10:49:36 -- common/autotest_common.sh@10 -- # set +x 00:32:36.644 ************************************ 00:32:36.644 START TEST nvmf_identify_passthru 00:32:36.644 ************************************ 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:36.645 * Looking for test storage... 00:32:36.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:36.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.645 --rc genhtml_branch_coverage=1 00:32:36.645 --rc genhtml_function_coverage=1 00:32:36.645 --rc genhtml_legend=1 
00:32:36.645 --rc geninfo_all_blocks=1 00:32:36.645 --rc geninfo_unexecuted_blocks=1 00:32:36.645 00:32:36.645 ' 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:36.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.645 --rc genhtml_branch_coverage=1 00:32:36.645 --rc genhtml_function_coverage=1 00:32:36.645 --rc genhtml_legend=1 00:32:36.645 --rc geninfo_all_blocks=1 00:32:36.645 --rc geninfo_unexecuted_blocks=1 00:32:36.645 00:32:36.645 ' 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:36.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.645 --rc genhtml_branch_coverage=1 00:32:36.645 --rc genhtml_function_coverage=1 00:32:36.645 --rc genhtml_legend=1 00:32:36.645 --rc geninfo_all_blocks=1 00:32:36.645 --rc geninfo_unexecuted_blocks=1 00:32:36.645 00:32:36.645 ' 00:32:36.645 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:36.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.645 --rc genhtml_branch_coverage=1 00:32:36.645 --rc genhtml_function_coverage=1 00:32:36.645 --rc genhtml_legend=1 00:32:36.645 --rc geninfo_all_blocks=1 00:32:36.645 --rc geninfo_unexecuted_blocks=1 00:32:36.645 00:32:36.645 ' 00:32:36.645 10:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.645 10:49:36 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:36.645 10:49:36 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:36.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.645 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.645 10:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.645 10:49:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.645 10:49:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.646 10:49:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.646 10:49:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:36.646 10:49:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.646 10:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.646 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:36.646 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.646 10:49:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.646 10:49:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:41.920 
10:49:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:41.920 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:41.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:41.921 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:41.921 Found net devices under 0000:86:00.0: cvl_0_0 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.921 10:49:42 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:41.921 Found net devices under 0000:86:00.1: cvl_0_1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:41.921 
10:49:42 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:41.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:41.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:32:41.921 00:32:41.921 --- 10.0.0.2 ping statistics --- 00:32:41.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.921 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:41.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:32:41.921 00:32:41.921 --- 10.0.0.1 ping statistics --- 00:32:41.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.921 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:41.921 10:49:42 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:42.181 
10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:42.181 10:49:42 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:42.181 10:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:46.369 10:49:46 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:46.369 10:49:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:46.369 10:49:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:46.369 10:49:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3734825 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:50.558 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3734825 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3734825 ']' 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.558 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:50.558 [2024-11-20 10:49:51.138731] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:32:50.558 [2024-11-20 10:49:51.138781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.558 [2024-11-20 10:49:51.219967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.558 [2024-11-20 10:49:51.263849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.558 [2024-11-20 10:49:51.263888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.558 [2024-11-20 10:49:51.263895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.558 [2024-11-20 10:49:51.263901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.558 [2024-11-20 10:49:51.263906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:50.558 [2024-11-20 10:49:51.267970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.558 [2024-11-20 10:49:51.268001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.558 [2024-11-20 10:49:51.268110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.558 [2024-11-20 10:49:51.268110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.492 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.492 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:51.492 10:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:51.492 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.492 10:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.492 INFO: Log level set to 20 00:32:51.492 INFO: Requests: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "method": "nvmf_set_config", 00:32:51.492 "id": 1, 00:32:51.492 "params": { 00:32:51.492 "admin_cmd_passthru": { 00:32:51.492 "identify_ctrlr": true 00:32:51.492 } 00:32:51.492 } 00:32:51.492 } 00:32:51.492 00:32:51.492 INFO: response: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "id": 1, 00:32:51.492 "result": true 00:32:51.492 } 00:32:51.492 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.492 10:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.492 INFO: Setting log level to 20 00:32:51.492 INFO: Setting log level to 20 00:32:51.492 INFO: Log level set to 20 00:32:51.492 INFO: Log level set to 20 00:32:51.492 
INFO: Requests: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "method": "framework_start_init", 00:32:51.492 "id": 1 00:32:51.492 } 00:32:51.492 00:32:51.492 INFO: Requests: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "method": "framework_start_init", 00:32:51.492 "id": 1 00:32:51.492 } 00:32:51.492 00:32:51.492 [2024-11-20 10:49:52.077164] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:51.492 INFO: response: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "id": 1, 00:32:51.492 "result": true 00:32:51.492 } 00:32:51.492 00:32:51.492 INFO: response: 00:32:51.492 { 00:32:51.492 "jsonrpc": "2.0", 00:32:51.492 "id": 1, 00:32:51.492 "result": true 00:32:51.492 } 00:32:51.492 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.492 10:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.492 INFO: Setting log level to 40 00:32:51.492 INFO: Setting log level to 40 00:32:51.492 INFO: Setting log level to 40 00:32:51.492 [2024-11-20 10:49:52.090500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.492 10:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:51.492 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.493 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.493 10:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:51.493 10:49:52 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.493 10:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 Nvme0n1 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.773 10:49:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 [2024-11-20 10:49:55.005373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.773 10:49:55 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 [ 00:32:54.773 { 00:32:54.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:54.773 "subtype": "Discovery", 00:32:54.773 "listen_addresses": [], 00:32:54.773 "allow_any_host": true, 00:32:54.773 "hosts": [] 00:32:54.773 }, 00:32:54.773 { 00:32:54.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:54.773 "subtype": "NVMe", 00:32:54.773 "listen_addresses": [ 00:32:54.773 { 00:32:54.773 "trtype": "TCP", 00:32:54.773 "adrfam": "IPv4", 00:32:54.773 "traddr": "10.0.0.2", 00:32:54.773 "trsvcid": "4420" 00:32:54.773 } 00:32:54.773 ], 00:32:54.773 "allow_any_host": true, 00:32:54.773 "hosts": [], 00:32:54.773 "serial_number": "SPDK00000000000001", 00:32:54.773 "model_number": "SPDK bdev Controller", 00:32:54.773 "max_namespaces": 1, 00:32:54.773 "min_cntlid": 1, 00:32:54.773 "max_cntlid": 65519, 00:32:54.773 "namespaces": [ 00:32:54.773 { 00:32:54.773 "nsid": 1, 00:32:54.773 "bdev_name": "Nvme0n1", 00:32:54.773 "name": "Nvme0n1", 00:32:54.773 "nguid": "BC4676D8ECE346989C2B62E617585ED3", 00:32:54.773 "uuid": "bc4676d8-ece3-4698-9c2b-62e617585ed3" 00:32:54.773 } 00:32:54.773 ] 00:32:54.773 } 00:32:54.773 ] 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:54.773 10:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.773 rmmod nvme_tcp 00:32:54.773 rmmod nvme_fabrics 00:32:54.773 rmmod nvme_keyring 00:32:54.773 10:49:55 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3734825 ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3734825 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3734825 ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3734825 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3734825 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.773 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3734825' 00:32:54.773 killing process with pid 3734825 00:32:54.774 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3734825 00:32:54.774 10:49:55 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3734825 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:56.147 10:49:56 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:56.147 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:56.424 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:56.424 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:56.424 10:49:56 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.424 10:49:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:56.425 10:49:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.331 10:49:58 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:58.331 00:32:58.331 real 0m22.407s 00:32:58.331 user 0m29.300s 00:32:58.331 sys 0m6.168s 00:32:58.331 10:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.331 10:49:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.331 ************************************ 00:32:58.331 END TEST nvmf_identify_passthru 00:32:58.331 ************************************ 00:32:58.331 10:49:58 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:58.331 10:49:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:58.331 10:49:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:58.331 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:32:58.331 ************************************ 00:32:58.331 START TEST nvmf_dif 00:32:58.331 ************************************ 00:32:58.331 10:49:59 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:58.590 * Looking for test storage... 
00:32:58.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:58.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.590 --rc genhtml_branch_coverage=1 00:32:58.590 --rc genhtml_function_coverage=1 00:32:58.590 --rc genhtml_legend=1 00:32:58.590 --rc geninfo_all_blocks=1 00:32:58.590 --rc geninfo_unexecuted_blocks=1 00:32:58.590 00:32:58.590 ' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:58.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.590 --rc genhtml_branch_coverage=1 00:32:58.590 --rc genhtml_function_coverage=1 00:32:58.590 --rc genhtml_legend=1 00:32:58.590 --rc geninfo_all_blocks=1 00:32:58.590 --rc geninfo_unexecuted_blocks=1 00:32:58.590 00:32:58.590 ' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:32:58.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.590 --rc genhtml_branch_coverage=1 00:32:58.590 --rc genhtml_function_coverage=1 00:32:58.590 --rc genhtml_legend=1 00:32:58.590 --rc geninfo_all_blocks=1 00:32:58.590 --rc geninfo_unexecuted_blocks=1 00:32:58.590 00:32:58.590 ' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:58.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.590 --rc genhtml_branch_coverage=1 00:32:58.590 --rc genhtml_function_coverage=1 00:32:58.590 --rc genhtml_legend=1 00:32:58.590 --rc geninfo_all_blocks=1 00:32:58.590 --rc geninfo_unexecuted_blocks=1 00:32:58.590 00:32:58.590 ' 00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:58.590 10:49:59 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.590 10:49:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.590 10:49:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.590 10:49:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.590 10:49:59 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.590 10:49:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:58.590 10:49:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:58.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:58.590 10:49:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:58.590 10:49:59 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:58.590 10:49:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:05.257 10:50:04 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:05.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:05.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.257 10:50:04 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:05.257 Found net devices under 0000:86:00.0: cvl_0_0 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:05.257 Found net devices under 0000:86:00.1: cvl_0_1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.257 
10:50:04 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.257 10:50:04 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:05.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:33:05.257 00:33:05.257 --- 10.0.0.2 ping statistics --- 00:33:05.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.257 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:05.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:05.257 00:33:05.257 --- 10.0.0.1 ping statistics --- 00:33:05.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.257 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:05.257 10:50:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:05.258 10:50:05 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:07.264 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:07.264 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:07.264 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:07.264 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.264 10:50:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:07.264 10:50:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:07.264 10:50:07 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.264 10:50:07 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.264 10:50:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.524 10:50:07 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3740445 00:33:07.524 10:50:07 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:07.524 10:50:07 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3740445 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3740445 ']' 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:07.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.524 10:50:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.524 [2024-11-20 10:50:08.046644] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:33:07.524 [2024-11-20 10:50:08.046684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.524 [2024-11-20 10:50:08.121273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.524 [2024-11-20 10:50:08.173566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.524 [2024-11-20 10:50:08.173611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.524 [2024-11-20 10:50:08.173623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.524 [2024-11-20 10:50:08.173633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.524 [2024-11-20 10:50:08.173641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:07.524 [2024-11-20 10:50:08.174383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:07.784 10:50:08 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 10:50:08 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.784 10:50:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:07.784 10:50:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 [2024-11-20 10:50:08.322022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.784 10:50:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 ************************************ 00:33:07.784 START TEST fio_dif_1_default 00:33:07.784 ************************************ 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 bdev_null0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 [2024-11-20 10:50:08.394334] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.784 { 00:33:07.784 "params": { 00:33:07.784 "name": "Nvme$subsystem", 00:33:07.784 "trtype": "$TEST_TRANSPORT", 00:33:07.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.784 "adrfam": "ipv4", 00:33:07.784 "trsvcid": "$NVMF_PORT", 00:33:07.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.784 "hdgst": ${hdgst:-false}, 00:33:07.784 "ddgst": ${ddgst:-false} 00:33:07.784 }, 00:33:07.784 "method": "bdev_nvme_attach_controller" 00:33:07.784 } 00:33:07.784 EOF 00:33:07.784 )") 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:07.784 "params": { 00:33:07.784 "name": "Nvme0", 00:33:07.784 "trtype": "tcp", 00:33:07.784 "traddr": "10.0.0.2", 00:33:07.784 "adrfam": "ipv4", 00:33:07.784 "trsvcid": "4420", 00:33:07.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:07.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:07.784 "hdgst": false, 00:33:07.784 "ddgst": false 00:33:07.784 }, 00:33:07.784 "method": "bdev_nvme_attach_controller" 00:33:07.784 }' 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:07.784 10:50:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.042 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:08.042 fio-3.35 
00:33:08.042 Starting 1 thread 00:33:20.247 00:33:20.247 filename0: (groupid=0, jobs=1): err= 0: pid=3740814: Wed Nov 20 10:50:19 2024 00:33:20.247 read: IOPS=192, BW=771KiB/s (790kB/s)(7744KiB/10038msec) 00:33:20.247 slat (nsec): min=5997, max=34435, avg=6263.93, stdev=858.49 00:33:20.247 clat (usec): min=376, max=44660, avg=20721.89, stdev=20527.92 00:33:20.247 lat (usec): min=382, max=44695, avg=20728.15, stdev=20527.88 00:33:20.247 clat percentiles (usec): 00:33:20.247 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 420], 20.00th=[ 441], 00:33:20.247 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 594], 60.00th=[41157], 00:33:20.247 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:20.247 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:20.247 | 99.99th=[44827] 00:33:20.247 bw ( KiB/s): min= 672, max= 832, per=100.00%, avg=772.80, stdev=39.23, samples=20 00:33:20.247 iops : min= 168, max= 208, avg=193.20, stdev= 9.81, samples=20 00:33:20.247 lat (usec) : 500=43.70%, 750=6.92% 00:33:20.247 lat (msec) : 50=49.38% 00:33:20.247 cpu : usr=92.98%, sys=6.76%, ctx=9, majf=0, minf=0 00:33:20.247 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:20.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.247 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.247 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:20.247 00:33:20.247 Run status group 0 (all jobs): 00:33:20.247 READ: bw=771KiB/s (790kB/s), 771KiB/s-771KiB/s (790kB/s-790kB/s), io=7744KiB (7930kB), run=10038-10038msec 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 00:33:20.247 real 0m11.271s 00:33:20.247 user 0m15.925s 00:33:20.247 sys 0m0.999s 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 ************************************ 00:33:20.247 END TEST fio_dif_1_default 00:33:20.247 ************************************ 00:33:20.247 10:50:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:20.247 10:50:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.247 10:50:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 ************************************ 00:33:20.247 START TEST fio_dif_1_multi_subsystems 00:33:20.247 ************************************ 00:33:20.247 10:50:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 bdev_null0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 [2024-11-20 10:50:19.742198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 bdev_null1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.248 { 00:33:20.248 "params": { 00:33:20.248 "name": "Nvme$subsystem", 00:33:20.248 "trtype": "$TEST_TRANSPORT", 00:33:20.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.248 "adrfam": "ipv4", 00:33:20.248 "trsvcid": "$NVMF_PORT", 00:33:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.248 "hdgst": ${hdgst:-false}, 00:33:20.248 "ddgst": ${ddgst:-false} 00:33:20.248 }, 00:33:20.248 "method": "bdev_nvme_attach_controller" 00:33:20.248 } 00:33:20.248 EOF 00:33:20.248 )") 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
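The subsystem setup traced in this test boils down to four RPCs per subsystem (a null bdev with 16-byte metadata and DIF type 1, the NVMe-oF subsystem, its namespace, and a TCP listener), mirrored by two RPCs at teardown. A sketch of that create_subsystem/destroy_subsystem sequence from target/dif.sh, with rpc_cmd stubbed out here so the flow is visible without a live SPDK target (it normally wraps the SPDK JSON-RPC client):

```shell
# Sketch of create_subsystem/destroy_subsystem from target/dif.sh.
# rpc_cmd is a stub for illustration; arguments are copied from this
# run's trace.
rpc_cmd() { echo "rpc_cmd $*"; }

create_subsystem() {
  local sub_id=$1
  # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1
  rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
      --serial-number "53313233-$sub_id" --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
      -t tcp -a 10.0.0.2 -s 4420
}

destroy_subsystem() {
  local sub_id=$1
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
  rpc_cmd bdev_null_delete "bdev_null$sub_id"
}

# The multi-subsystem test does this for ids 0 and 1:
create_subsystem 0
create_subsystem 1
destroy_subsystem 0
destroy_subsystem 1
```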
00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.248 { 00:33:20.248 "params": { 00:33:20.248 "name": "Nvme$subsystem", 00:33:20.248 "trtype": "$TEST_TRANSPORT", 00:33:20.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.248 "adrfam": "ipv4", 00:33:20.248 "trsvcid": "$NVMF_PORT", 00:33:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.248 "hdgst": ${hdgst:-false}, 00:33:20.248 "ddgst": ${ddgst:-false} 00:33:20.248 }, 00:33:20.248 "method": "bdev_nvme_attach_controller" 00:33:20.248 } 00:33:20.248 EOF 00:33:20.248 )") 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.248 "params": { 00:33:20.248 "name": "Nvme0", 00:33:20.248 "trtype": "tcp", 00:33:20.248 "traddr": "10.0.0.2", 00:33:20.248 "adrfam": "ipv4", 00:33:20.248 "trsvcid": "4420", 00:33:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.248 "hdgst": false, 00:33:20.248 "ddgst": false 00:33:20.248 }, 00:33:20.248 "method": "bdev_nvme_attach_controller" 00:33:20.248 },{ 00:33:20.248 "params": { 00:33:20.248 "name": "Nvme1", 00:33:20.248 "trtype": "tcp", 00:33:20.248 "traddr": "10.0.0.2", 00:33:20.248 "adrfam": "ipv4", 00:33:20.248 "trsvcid": "4420", 00:33:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:20.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:20.248 "hdgst": false, 00:33:20.248 "ddgst": false 00:33:20.248 }, 00:33:20.248 "method": "bdev_nvme_attach_controller" 00:33:20.248 }' 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:20.248 10:50:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.248 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:20.248 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:20.248 fio-3.35 00:33:20.248 Starting 2 threads 00:33:30.235 00:33:30.235 filename0: (groupid=0, jobs=1): err= 0: pid=3742780: Wed Nov 20 10:50:30 2024 00:33:30.235 read: IOPS=102, BW=409KiB/s (419kB/s)(4096KiB/10006msec) 00:33:30.235 slat (nsec): min=6165, max=91241, avg=10684.31, stdev=7591.09 00:33:30.235 clat (usec): min=405, max=42509, avg=39050.61, stdev=9298.36 00:33:30.235 lat (usec): min=411, max=42517, avg=39061.30, stdev=9298.34 00:33:30.235 clat percentiles (usec): 00:33:30.235 | 1.00th=[ 429], 5.00th=[ 474], 10.00th=[41157], 20.00th=[41157], 00:33:30.235 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:30.235 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:30.235 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:30.235 | 99.99th=[42730] 00:33:30.235 bw ( KiB/s): min= 352, max= 512, per=34.99%, avg=409.26, stdev=39.31, samples=19 00:33:30.235 iops : min= 88, max= 128, avg=102.32, stdev= 9.83, samples=19 00:33:30.235 lat (usec) : 500=5.08%, 750=0.39% 00:33:30.235 lat (msec) : 50=94.53% 00:33:30.235 cpu : usr=98.91%, sys=0.79%, ctx=32, majf=0, minf=128 00:33:30.235 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:30.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.235 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.235 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:30.236 filename1: (groupid=0, jobs=1): err= 0: pid=3742781: Wed Nov 20 10:50:30 2024 00:33:30.236 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:33:30.236 slat (nsec): min=6146, max=67921, avg=9114.04, stdev=6199.81 00:33:30.236 clat (usec): min=387, max=42568, avg=21030.61, stdev=20518.10 00:33:30.236 lat (usec): min=394, max=42599, avg=21039.73, stdev=20516.05 00:33:30.236 clat percentiles (usec): 00:33:30.236 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 424], 20.00th=[ 482], 00:33:30.236 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[40633], 60.00th=[41681], 00:33:30.236 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:30.236 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:30.236 | 99.99th=[42730] 00:33:30.236 bw ( KiB/s): min= 704, max= 832, per=65.10%, avg=761.26, stdev=29.37, samples=19 00:33:30.236 iops : min= 176, max= 208, avg=190.32, stdev= 7.34, samples=19 00:33:30.236 lat (usec) : 500=40.42%, 750=9.26%, 1000=0.21% 00:33:30.236 lat (msec) : 50=50.11% 00:33:30.236 cpu : usr=97.32%, sys=2.41%, ctx=13, majf=0, minf=123 00:33:30.236 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.236 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.236 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:30.236 00:33:30.236 Run status group 0 (all jobs): 00:33:30.236 READ: bw=1169KiB/s (1197kB/s), 409KiB/s-760KiB/s (419kB/s-778kB/s), io=11.4MiB (12.0MB), run=10003-10006msec 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 00:33:30.495 real 0m11.314s 00:33:30.495 user 0m26.213s 00:33:30.495 sys 0m0.661s 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 ************************************ 00:33:30.495 END TEST fio_dif_1_multi_subsystems 00:33:30.495 ************************************ 00:33:30.495 10:50:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:30.495 10:50:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:30.495 10:50:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 ************************************ 00:33:30.495 START TEST fio_dif_rand_params 00:33:30.495 ************************************ 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 bdev_null0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.495 [2024-11-20 10:50:31.135851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.495 { 00:33:30.495 "params": { 00:33:30.495 "name": "Nvme$subsystem", 00:33:30.495 "trtype": 
"$TEST_TRANSPORT", 00:33:30.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.495 "adrfam": "ipv4", 00:33:30.495 "trsvcid": "$NVMF_PORT", 00:33:30.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.495 "hdgst": ${hdgst:-false}, 00:33:30.495 "ddgst": ${ddgst:-false} 00:33:30.495 }, 00:33:30.495 "method": "bdev_nvme_attach_controller" 00:33:30.495 } 00:33:30.495 EOF 00:33:30.495 )") 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:30.495 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# grep libasan 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.496 "params": { 00:33:30.496 "name": "Nvme0", 00:33:30.496 "trtype": "tcp", 00:33:30.496 "traddr": "10.0.0.2", 00:33:30.496 "adrfam": "ipv4", 00:33:30.496 "trsvcid": "4420", 00:33:30.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:30.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:30.496 "hdgst": false, 00:33:30.496 "ddgst": false 00:33:30.496 }, 00:33:30.496 "method": "bdev_nvme_attach_controller" 00:33:30.496 }' 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:30.496 10:50:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:31.072 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:31.073 ... 00:33:31.073 fio-3.35 00:33:31.073 Starting 3 threads 00:33:36.334 00:33:36.334 filename0: (groupid=0, jobs=1): err= 0: pid=3744659: Wed Nov 20 10:50:37 2024 00:33:36.334 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5043msec) 00:33:36.334 slat (nsec): min=6233, max=22658, avg=10908.21, stdev=1572.05 00:33:36.334 clat (usec): min=4481, max=49602, avg=9621.53, stdev=3887.12 00:33:36.334 lat (usec): min=4487, max=49620, avg=9632.44, stdev=3887.16 00:33:36.334 clat percentiles (usec): 00:33:36.334 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 8094], 00:33:36.334 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:36.334 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:33:36.334 | 99.00th=[13698], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:33:36.334 | 99.99th=[49546] 00:33:36.334 bw ( KiB/s): min=29440, max=43776, per=33.42%, avg=40038.40, stdev=4142.32, samples=10 00:33:36.334 iops : min= 230, max= 342, avg=312.80, stdev=32.36, samples=10 00:33:36.334 lat (msec) : 10=68.45%, 20=30.65%, 50=0.89% 00:33:36.334 cpu : usr=94.49%, sys=5.22%, ctx=7, majf=0, minf=36 00:33:36.334 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:36.334 filename0: (groupid=0, jobs=1): err= 0: pid=3744660: Wed Nov 20 10:50:37 2024 00:33:36.334 read: IOPS=335, BW=41.9MiB/s (44.0MB/s)(212MiB/5045msec) 00:33:36.334 slat (nsec): min=6240, max=25475, avg=10761.33, 
stdev=1711.24 00:33:36.334 clat (usec): min=4325, max=50306, avg=8906.63, stdev=3856.24 00:33:36.334 lat (usec): min=4335, max=50327, avg=8917.39, stdev=3856.27 00:33:36.334 clat percentiles (usec): 00:33:36.334 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7701], 00:33:36.334 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:36.334 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[10552], 00:33:36.334 | 99.00th=[11994], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:33:36.334 | 99.99th=[50070] 00:33:36.334 bw ( KiB/s): min=38656, max=46848, per=36.11%, avg=43264.00, stdev=2806.93, samples=10 00:33:36.334 iops : min= 302, max= 366, avg=338.00, stdev=21.93, samples=10 00:33:36.334 lat (msec) : 10=88.24%, 20=10.93%, 50=0.65%, 100=0.18% 00:33:36.334 cpu : usr=94.17%, sys=5.55%, ctx=12, majf=0, minf=57 00:33:36.334 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:36.334 filename0: (groupid=0, jobs=1): err= 0: pid=3744661: Wed Nov 20 10:50:37 2024 00:33:36.334 read: IOPS=292, BW=36.6MiB/s (38.4MB/s)(183MiB/5002msec) 00:33:36.334 slat (nsec): min=6217, max=22725, avg=10690.38, stdev=1660.27 00:33:36.334 clat (usec): min=3957, max=50833, avg=10236.42, stdev=4418.84 00:33:36.334 lat (usec): min=3965, max=50846, avg=10247.11, stdev=4418.64 00:33:36.334 clat percentiles (usec): 00:33:36.334 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 8586], 00:33:36.334 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:33:36.334 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:33:36.334 | 99.00th=[44827], 99.50th=[48497], 
99.90th=[49546], 99.95th=[50594], 00:33:36.334 | 99.99th=[50594] 00:33:36.334 bw ( KiB/s): min=29440, max=42752, per=31.24%, avg=37427.20, stdev=4054.55, samples=10 00:33:36.334 iops : min= 230, max= 334, avg=292.40, stdev=31.68, samples=10 00:33:36.334 lat (msec) : 4=0.41%, 10=51.16%, 20=47.20%, 50=1.16%, 100=0.07% 00:33:36.334 cpu : usr=94.16%, sys=5.56%, ctx=10, majf=0, minf=50 00:33:36.334 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.334 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:36.334 00:33:36.334 Run status group 0 (all jobs): 00:33:36.334 READ: bw=117MiB/s (123MB/s), 36.6MiB/s-41.9MiB/s (38.4MB/s-44.0MB/s), io=590MiB (619MB), run=5002-5045msec 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.592 10:50:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 bdev_null0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 [2024-11-20 10:50:37.250451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.592 bdev_null1 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:36.592 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:36.593 bdev_null2 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.593 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.851 { 00:33:36.851 "params": { 00:33:36.851 "name": "Nvme$subsystem", 00:33:36.851 "trtype": "$TEST_TRANSPORT", 00:33:36.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.851 "adrfam": "ipv4", 00:33:36.851 "trsvcid": "$NVMF_PORT", 00:33:36.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.851 "hdgst": ${hdgst:-false}, 00:33:36.851 "ddgst": ${ddgst:-false} 00:33:36.851 }, 00:33:36.851 "method": "bdev_nvme_attach_controller" 00:33:36.851 } 00:33:36.851 EOF 00:33:36.851 )") 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.851 10:50:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.851 { 00:33:36.851 "params": { 00:33:36.851 "name": "Nvme$subsystem", 00:33:36.851 "trtype": "$TEST_TRANSPORT", 00:33:36.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.851 "adrfam": "ipv4", 00:33:36.851 "trsvcid": "$NVMF_PORT", 00:33:36.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.851 "hdgst": ${hdgst:-false}, 00:33:36.851 "ddgst": ${ddgst:-false} 00:33:36.851 }, 00:33:36.851 "method": "bdev_nvme_attach_controller" 00:33:36.851 } 00:33:36.851 EOF 00:33:36.851 )") 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.851 10:50:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.851 { 00:33:36.851 "params": { 00:33:36.851 "name": "Nvme$subsystem", 00:33:36.851 "trtype": "$TEST_TRANSPORT", 00:33:36.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.851 "adrfam": "ipv4", 00:33:36.851 "trsvcid": "$NVMF_PORT", 00:33:36.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.851 "hdgst": ${hdgst:-false}, 00:33:36.851 "ddgst": ${ddgst:-false} 00:33:36.851 }, 00:33:36.851 "method": "bdev_nvme_attach_controller" 00:33:36.851 } 00:33:36.851 EOF 00:33:36.851 )") 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:36.851 10:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.851 "params": { 00:33:36.851 "name": "Nvme0", 00:33:36.851 "trtype": "tcp", 00:33:36.851 "traddr": "10.0.0.2", 00:33:36.851 "adrfam": "ipv4", 00:33:36.851 "trsvcid": "4420", 00:33:36.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.852 "hdgst": false, 00:33:36.852 "ddgst": false 00:33:36.852 }, 00:33:36.852 "method": "bdev_nvme_attach_controller" 00:33:36.852 },{ 00:33:36.852 "params": { 00:33:36.852 "name": "Nvme1", 00:33:36.852 "trtype": "tcp", 00:33:36.852 "traddr": "10.0.0.2", 00:33:36.852 "adrfam": "ipv4", 00:33:36.852 "trsvcid": "4420", 00:33:36.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.852 "hdgst": false, 00:33:36.852 "ddgst": false 00:33:36.852 }, 00:33:36.852 "method": "bdev_nvme_attach_controller" 00:33:36.852 },{ 00:33:36.852 "params": { 00:33:36.852 "name": "Nvme2", 00:33:36.852 "trtype": "tcp", 00:33:36.852 "traddr": "10.0.0.2", 00:33:36.852 "adrfam": "ipv4", 00:33:36.852 "trsvcid": "4420", 00:33:36.852 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:36.852 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:36.852 "hdgst": false, 00:33:36.852 "ddgst": false 00:33:36.852 }, 00:33:36.852 "method": "bdev_nvme_attach_controller" 00:33:36.852 }' 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.852 10:50:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.852 10:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.109 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:37.109 ... 00:33:37.109 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:37.109 ... 00:33:37.109 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:37.109 ... 
00:33:37.109 fio-3.35 00:33:37.109 Starting 24 threads 00:33:49.309 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745799: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10016msec) 00:33:49.309 slat (nsec): min=8174, max=71048, avg=30935.54, stdev=9366.65 00:33:49.309 clat (usec): min=16691, max=36247, avg=27797.35, stdev=845.07 00:33:49.309 lat (usec): min=16734, max=36274, avg=27828.29, stdev=844.01 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.309 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:49.309 | 99.00th=[28967], 99.50th=[29492], 99.90th=[35914], 99.95th=[36439], 00:33:49.309 | 99.99th=[36439] 00:33:49.309 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:49.309 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:49.309 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.309 cpu : usr=98.49%, sys=1.15%, ctx=14, majf=0, minf=9 00:33:49.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745800: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=574, BW=2298KiB/s (2354kB/s)(22.5MiB/10007msec) 00:33:49.309 slat (nsec): min=6726, max=94950, avg=32973.66, stdev=15807.27 00:33:49.309 clat (usec): min=8304, max=62294, avg=27594.92, stdev=3137.32 00:33:49.309 lat (usec): min=8332, max=62334, avg=27627.89, stdev=3139.19 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[15533], 
5.00th=[23462], 10.00th=[27395], 20.00th=[27657], 00:33:49.309 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:33:49.309 | 99.00th=[35914], 99.50th=[41157], 99.90th=[62129], 99.95th=[62129], 00:33:49.309 | 99.99th=[62129] 00:33:49.309 bw ( KiB/s): min= 2148, max= 2496, per=4.18%, avg=2293.30, stdev=81.05, samples=20 00:33:49.309 iops : min= 537, max= 624, avg=573.30, stdev=20.22, samples=20 00:33:49.309 lat (msec) : 10=0.28%, 20=1.50%, 50=97.95%, 100=0.28% 00:33:49.309 cpu : usr=98.64%, sys=0.99%, ctx=14, majf=0, minf=10 00:33:49.309 IO depths : 1=0.9%, 2=6.2%, 4=21.3%, 8=59.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745801: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=578, BW=2313KiB/s (2369kB/s)(22.6MiB/10016msec) 00:33:49.309 slat (nsec): min=6892, max=76342, avg=24575.20, stdev=15895.55 00:33:49.309 clat (usec): min=3870, max=30506, avg=27485.94, stdev=2850.48 00:33:49.309 lat (usec): min=3885, max=30525, avg=27510.51, stdev=2850.81 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[ 6849], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:33:49.309 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:49.309 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30278], 99.95th=[30540], 00:33:49.309 | 99.99th=[30540] 00:33:49.309 bw ( KiB/s): min= 2176, max= 2949, per=4.21%, avg=2310.65, stdev=159.06, samples=20 00:33:49.309 iops : min= 544, max= 737, avg=577.65, stdev=39.71, samples=20 
00:33:49.309 lat (msec) : 4=0.14%, 10=1.24%, 20=0.83%, 50=97.79% 00:33:49.309 cpu : usr=98.54%, sys=1.06%, ctx=14, majf=0, minf=9 00:33:49.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745802: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:33:49.309 slat (nsec): min=7202, max=60370, avg=18090.76, stdev=9883.42 00:33:49.309 clat (usec): min=15241, max=29563, avg=27891.53, stdev=725.15 00:33:49.309 lat (usec): min=15249, max=29599, avg=27909.62, stdev=723.63 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:49.309 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.309 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:33:49.309 | 99.99th=[29492] 00:33:49.309 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:49.309 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:49.309 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.309 cpu : usr=98.63%, sys=1.01%, ctx=15, majf=0, minf=9 00:33:49.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745803: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=579, BW=2319KiB/s (2374kB/s)(22.7MiB/10016msec) 00:33:49.309 slat (nsec): min=6844, max=75310, avg=14689.50, stdev=8524.28 00:33:49.309 clat (usec): min=1857, max=30558, avg=27480.13, stdev=3131.89 00:33:49.309 lat (usec): min=1884, max=30571, avg=27494.82, stdev=3131.34 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[ 5604], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:49.309 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.309 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30540], 99.95th=[30540], 00:33:49.309 | 99.99th=[30540] 00:33:49.309 bw ( KiB/s): min= 2176, max= 3056, per=4.22%, avg=2316.00, stdev=181.83, samples=20 00:33:49.309 iops : min= 544, max= 764, avg=579.00, stdev=45.46, samples=20 00:33:49.309 lat (msec) : 2=0.10%, 4=0.19%, 10=1.33%, 20=0.83%, 50=97.55% 00:33:49.309 cpu : usr=98.34%, sys=1.29%, ctx=14, majf=0, minf=9 00:33:49.309 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745804: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:33:49.309 slat (nsec): min=6175, max=81093, avg=38017.86, stdev=18332.54 00:33:49.309 clat (usec): min=15856, max=30685, avg=27688.10, stdev=814.34 00:33:49.309 lat (usec): min=15863, max=30704, avg=27726.12, stdev=815.01 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[27132], 5.00th=[27395], 
10.00th=[27395], 20.00th=[27395], 00:33:49.309 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.309 | 99.00th=[28967], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:33:49.309 | 99.99th=[30802] 00:33:49.309 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.26, stdev=53.20, samples=19 00:33:49.309 iops : min= 544, max= 576, avg=569.32, stdev=13.30, samples=19 00:33:49.309 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.309 cpu : usr=98.58%, sys=1.05%, ctx=11, majf=0, minf=9 00:33:49.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.309 filename0: (groupid=0, jobs=1): err= 0: pid=3745805: Wed Nov 20 10:50:48 2024 00:33:49.309 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:33:49.309 slat (nsec): min=8404, max=80464, avg=33403.77, stdev=15581.34 00:33:49.309 clat (usec): min=10107, max=30520, avg=27640.66, stdev=1448.90 00:33:49.309 lat (usec): min=10128, max=30535, avg=27674.07, stdev=1449.65 00:33:49.309 clat percentiles (usec): 00:33:49.309 | 1.00th=[18744], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.309 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.309 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.309 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30278], 99.95th=[30540], 00:33:49.309 | 99.99th=[30540] 00:33:49.309 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2290.53, stdev=58.73, samples=19 00:33:49.309 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:49.309 lat (msec) : 20=1.12%, 
50=98.88% 00:33:49.309 cpu : usr=98.61%, sys=0.73%, ctx=138, majf=0, minf=9 00:33:49.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.309 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename0: (groupid=0, jobs=1): err= 0: pid=3745806: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=579, BW=2319KiB/s (2374kB/s)(22.7MiB/10020msec) 00:33:49.310 slat (nsec): min=6612, max=74620, avg=13166.66, stdev=6939.21 00:33:49.310 clat (usec): min=3262, max=30429, avg=27490.67, stdev=3110.51 00:33:49.310 lat (usec): min=3276, max=30442, avg=27503.84, stdev=3109.74 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[ 5538], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:49.310 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.310 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30278], 99.95th=[30540], 00:33:49.310 | 99.99th=[30540] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2944, per=4.22%, avg=2316.80, stdev=154.83, samples=20 00:33:49.310 iops : min= 544, max= 736, avg=579.20, stdev=38.71, samples=20 00:33:49.310 lat (msec) : 4=0.48%, 10=1.02%, 20=0.98%, 50=97.52% 00:33:49.310 cpu : usr=98.73%, sys=0.87%, ctx=14, majf=0, minf=9 00:33:49.310 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: 
(groupid=0, jobs=1): err= 0: pid=3745807: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10021msec) 00:33:49.310 slat (nsec): min=6823, max=63425, avg=15910.99, stdev=9047.30 00:33:49.310 clat (usec): min=9183, max=29471, avg=27788.44, stdev=1535.13 00:33:49.310 lat (usec): min=9197, max=29486, avg=27804.35, stdev=1534.70 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[18744], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:49.310 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.310 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:49.310 | 99.99th=[29492] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2291.20, stdev=57.24, samples=20 00:33:49.310 iops : min= 544, max= 608, avg=572.80, stdev=14.31, samples=20 00:33:49.310 lat (msec) : 10=0.24%, 20=0.87%, 50=98.89% 00:33:49.310 cpu : usr=98.31%, sys=1.32%, ctx=14, majf=0, minf=9 00:33:49.310 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: (groupid=0, jobs=1): err= 0: pid=3745808: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10011msec) 00:33:49.310 slat (nsec): min=6022, max=68129, avg=31713.36, stdev=9645.71 00:33:49.310 clat (usec): min=16662, max=30369, avg=27770.42, stdev=730.91 00:33:49.310 lat (usec): min=16704, max=30386, avg=27802.14, stdev=730.50 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.310 | 30.00th=[27657], 
40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.310 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30278], 99.95th=[30278], 00:33:49.310 | 99.99th=[30278] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:49.310 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:49.310 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.310 cpu : usr=98.58%, sys=1.05%, ctx=13, majf=0, minf=9 00:33:49.310 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: (groupid=0, jobs=1): err= 0: pid=3745809: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:33:49.310 slat (nsec): min=7078, max=93375, avg=31211.86, stdev=8486.02 00:33:49.310 clat (usec): min=8436, max=45594, avg=27760.42, stdev=1551.42 00:33:49.310 lat (usec): min=8459, max=45606, avg=27791.63, stdev=1551.25 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.310 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.310 | 99.00th=[28967], 99.50th=[29230], 99.90th=[45351], 99.95th=[45351], 00:33:49.310 | 99.99th=[45351] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2412, per=4.15%, avg=2277.60, stdev=64.33, samples=20 00:33:49.310 iops : min= 544, max= 603, avg=569.40, stdev=16.08, samples=20 00:33:49.310 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:33:49.310 cpu : usr=98.53%, sys=1.10%, ctx=13, 
majf=0, minf=9 00:33:49.310 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: (groupid=0, jobs=1): err= 0: pid=3745810: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=570, BW=2281KiB/s (2335kB/s)(22.3MiB/10018msec) 00:33:49.310 slat (nsec): min=5331, max=33229, avg=12415.88, stdev=4224.53 00:33:49.310 clat (usec): min=16781, max=39092, avg=27952.44, stdev=2342.27 00:33:49.310 lat (usec): min=16790, max=39100, avg=27964.85, stdev=2342.10 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[18482], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:49.310 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.310 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:33:49.310 | 99.99th=[39060] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=49.76, samples=19 00:33:49.310 iops : min= 544, max= 576, avg=569.26, stdev=12.44, samples=19 00:33:49.310 lat (msec) : 20=2.92%, 50=97.08% 00:33:49.310 cpu : usr=98.66%, sys=0.97%, ctx=13, majf=0, minf=9 00:33:49.310 IO depths : 1=2.5%, 2=8.7%, 4=24.7%, 8=54.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: (groupid=0, jobs=1): err= 0: pid=3745811: Wed Nov 20 10:50:48 2024 00:33:49.310 
read: IOPS=576, BW=2306KiB/s (2361kB/s)(22.6MiB/10022msec) 00:33:49.310 slat (nsec): min=6795, max=40541, avg=13116.12, stdev=4508.40 00:33:49.310 clat (usec): min=10199, max=47320, avg=27640.03, stdev=2847.75 00:33:49.310 lat (usec): min=10222, max=47337, avg=27653.14, stdev=2847.85 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[15139], 5.00th=[22676], 10.00th=[27657], 20.00th=[27919], 00:33:49.310 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:49.310 | 99.00th=[37487], 99.50th=[39060], 99.90th=[47449], 99.95th=[47449], 00:33:49.310 | 99.99th=[47449] 00:33:49.310 bw ( KiB/s): min= 2192, max= 2576, per=4.20%, avg=2304.80, stdev=82.32, samples=20 00:33:49.310 iops : min= 548, max= 644, avg=576.20, stdev=20.58, samples=20 00:33:49.310 lat (msec) : 20=3.57%, 50=96.43% 00:33:49.310 cpu : usr=98.64%, sys=0.99%, ctx=12, majf=0, minf=9 00:33:49.310 IO depths : 1=1.3%, 2=7.3%, 4=24.1%, 8=56.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.310 issued rwts: total=5778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.310 filename1: (groupid=0, jobs=1): err= 0: pid=3745812: Wed Nov 20 10:50:48 2024 00:33:49.310 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:33:49.310 slat (nsec): min=7095, max=76426, avg=30427.64, stdev=10102.65 00:33:49.310 clat (usec): min=8949, max=44700, avg=27740.85, stdev=1506.68 00:33:49.310 lat (usec): min=8974, max=44713, avg=27771.28, stdev=1506.87 00:33:49.310 clat percentiles (usec): 00:33:49.310 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.310 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.310 | 70.00th=[27919], 
80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:49.310 | 99.00th=[28967], 99.50th=[29230], 99.90th=[44827], 99.95th=[44827], 00:33:49.310 | 99.99th=[44827] 00:33:49.310 bw ( KiB/s): min= 2176, max= 2422, per=4.15%, avg=2277.90, stdev=65.78, samples=20 00:33:49.310 iops : min= 544, max= 605, avg=569.45, stdev=16.39, samples=20 00:33:49.310 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:33:49.310 cpu : usr=98.64%, sys=0.99%, ctx=14, majf=0, minf=9 00:33:49.310 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename1: (groupid=0, jobs=1): err= 0: pid=3745813: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=574, BW=2299KiB/s (2354kB/s)(22.5MiB/10009msec) 00:33:49.311 slat (nsec): min=6758, max=81236, avg=37057.98, stdev=19307.55 00:33:49.311 clat (usec): min=4373, max=62948, avg=27512.94, stdev=3020.20 00:33:49.311 lat (usec): min=4381, max=62962, avg=27549.99, stdev=3021.37 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[15139], 5.00th=[26608], 10.00th=[27395], 20.00th=[27395], 00:33:49.311 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.311 | 99.00th=[35914], 99.50th=[37487], 99.90th=[62653], 99.95th=[63177], 00:33:49.311 | 99.99th=[63177] 00:33:49.311 bw ( KiB/s): min= 2160, max= 2422, per=4.18%, avg=2293.90, stdev=73.82, samples=20 00:33:49.311 iops : min= 540, max= 605, avg=573.45, stdev=18.41, samples=20 00:33:49.311 lat (msec) : 10=0.31%, 20=1.50%, 50=97.91%, 100=0.28% 00:33:49.311 cpu : usr=98.66%, sys=0.97%, ctx=8, majf=0, minf=9 00:33:49.311 IO depths : 1=5.5%, 2=11.0%, 
4=22.6%, 8=53.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename1: (groupid=0, jobs=1): err= 0: pid=3745814: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10009msec) 00:33:49.311 slat (nsec): min=4497, max=61001, avg=30454.67, stdev=8896.32 00:33:49.311 clat (usec): min=8921, max=45652, avg=27758.17, stdev=1496.14 00:33:49.311 lat (usec): min=8941, max=45667, avg=27788.63, stdev=1496.10 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.311 | 99.00th=[28967], 99.50th=[29230], 99.90th=[45351], 99.95th=[45876], 00:33:49.311 | 99.99th=[45876] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2412, per=4.15%, avg=2277.60, stdev=64.33, samples=20 00:33:49.311 iops : min= 544, max= 603, avg=569.40, stdev=16.08, samples=20 00:33:49.311 lat (msec) : 10=0.25%, 20=0.28%, 50=99.47% 00:33:49.311 cpu : usr=98.56%, sys=1.08%, ctx=14, majf=0, minf=9 00:33:49.311 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745815: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=572, BW=2291KiB/s 
(2346kB/s)(22.4MiB/10001msec) 00:33:49.311 slat (nsec): min=7021, max=75922, avg=30461.77, stdev=17853.15 00:33:49.311 clat (usec): min=10109, max=30438, avg=27717.79, stdev=1459.84 00:33:49.311 lat (usec): min=10129, max=30453, avg=27748.25, stdev=1458.77 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[18744], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:49.311 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30278], 99.95th=[30278], 00:33:49.311 | 99.99th=[30540] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2290.53, stdev=58.73, samples=19 00:33:49.311 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:49.311 lat (msec) : 20=1.12%, 50=98.88% 00:33:49.311 cpu : usr=98.52%, sys=1.12%, ctx=12, majf=0, minf=9 00:33:49.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745816: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=570, BW=2281KiB/s (2335kB/s)(22.3MiB/10010msec) 00:33:49.311 slat (nsec): min=4407, max=59599, avg=31414.11, stdev=9377.48 00:33:49.311 clat (usec): min=9161, max=45793, avg=27761.75, stdev=1436.38 00:33:49.311 lat (usec): min=9179, max=45807, avg=27793.17, stdev=1436.11 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 
95.00th=[28181], 00:33:49.311 | 99.00th=[28967], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:33:49.311 | 99.99th=[45876] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2412, per=4.15%, avg=2277.60, stdev=64.33, samples=20 00:33:49.311 iops : min= 544, max= 603, avg=569.40, stdev=16.08, samples=20 00:33:49.311 lat (msec) : 10=0.19%, 20=0.28%, 50=99.53% 00:33:49.311 cpu : usr=98.57%, sys=1.07%, ctx=13, majf=0, minf=9 00:33:49.311 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745817: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:33:49.311 slat (nsec): min=4393, max=84251, avg=30113.21, stdev=10406.11 00:33:49.311 clat (usec): min=8970, max=45345, avg=27742.95, stdev=1526.50 00:33:49.311 lat (usec): min=8994, max=45357, avg=27773.06, stdev=1526.59 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:49.311 | 99.00th=[28967], 99.50th=[29230], 99.90th=[45351], 99.95th=[45351], 00:33:49.311 | 99.99th=[45351] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2412, per=4.15%, avg=2277.60, stdev=64.33, samples=20 00:33:49.311 iops : min= 544, max= 603, avg=569.40, stdev=16.08, samples=20 00:33:49.311 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:33:49.311 cpu : usr=98.74%, sys=0.91%, ctx=14, majf=0, minf=9 00:33:49.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745818: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10001msec) 00:33:49.311 slat (nsec): min=4253, max=75735, avg=31479.81, stdev=8817.73 00:33:49.311 clat (usec): min=16650, max=46110, avg=27805.94, stdev=1197.61 00:33:49.311 lat (usec): min=16674, max=46123, avg=27837.42, stdev=1196.73 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.311 | 99.00th=[28967], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:33:49.311 | 99.99th=[45876] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.53, stdev=57.55, samples=19 00:33:49.311 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:49.311 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.311 cpu : usr=98.55%, sys=1.08%, ctx=14, majf=0, minf=9 00:33:49.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745819: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10001msec) 00:33:49.311 slat (nsec): min=4170, 
max=59191, avg=30569.33, stdev=9215.11 00:33:49.311 clat (usec): min=16636, max=46175, avg=27839.86, stdev=1200.87 00:33:49.311 lat (usec): min=16676, max=46187, avg=27870.43, stdev=1199.35 00:33:49.311 clat percentiles (usec): 00:33:49.311 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:33:49.311 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:49.311 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.311 | 99.00th=[29230], 99.50th=[29230], 99.90th=[46400], 99.95th=[46400], 00:33:49.311 | 99.99th=[46400] 00:33:49.311 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:33:49.311 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:49.311 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.311 cpu : usr=98.38%, sys=1.25%, ctx=67, majf=0, minf=9 00:33:49.311 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.311 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.311 filename2: (groupid=0, jobs=1): err= 0: pid=3745820: Wed Nov 20 10:50:48 2024 00:33:49.311 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:33:49.311 slat (nsec): min=7358, max=90312, avg=28616.04, stdev=11487.20 00:33:49.312 clat (usec): min=16920, max=30229, avg=27811.26, stdev=720.61 00:33:49.312 lat (usec): min=16928, max=30277, avg=27839.88, stdev=719.69 00:33:49.312 clat percentiles (usec): 00:33:49.312 | 1.00th=[26084], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:33:49.312 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:49.312 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:49.312 | 99.00th=[28967], 99.50th=[29230], 
99.90th=[29492], 99.95th=[29754], 00:33:49.312 | 99.99th=[30278] 00:33:49.312 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:49.312 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:49.312 lat (msec) : 20=0.28%, 50=99.72% 00:33:49.312 cpu : usr=98.45%, sys=1.18%, ctx=17, majf=0, minf=9 00:33:49.312 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.312 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.312 filename2: (groupid=0, jobs=1): err= 0: pid=3745821: Wed Nov 20 10:50:48 2024 00:33:49.312 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:49.312 slat (nsec): min=3217, max=93219, avg=35099.78, stdev=14346.96 00:33:49.312 clat (usec): min=16733, max=50701, avg=27816.15, stdev=1403.19 00:33:49.312 lat (usec): min=16761, max=50713, avg=27851.25, stdev=1401.37 00:33:49.312 clat percentiles (usec): 00:33:49.312 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:49.312 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:49.312 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.312 | 99.00th=[28967], 99.50th=[29230], 99.90th=[50594], 99.95th=[50594], 00:33:49.312 | 99.99th=[50594] 00:33:49.312 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=71.93, samples=19 00:33:49.312 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:33:49.312 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:33:49.312 cpu : usr=98.61%, sys=1.02%, ctx=14, majf=0, minf=9 00:33:49.312 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:49.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.312 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.312 filename2: (groupid=0, jobs=1): err= 0: pid=3745822: Wed Nov 20 10:50:48 2024 00:33:49.312 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:33:49.312 slat (nsec): min=7685, max=81117, avg=37204.05, stdev=18082.93 00:33:49.312 clat (usec): min=9993, max=30528, avg=27632.04, stdev=1460.30 00:33:49.312 lat (usec): min=10001, max=30553, avg=27669.24, stdev=1459.33 00:33:49.312 clat percentiles (usec): 00:33:49.312 | 1.00th=[19268], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:49.312 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:49.312 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:49.312 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30540], 99.95th=[30540], 00:33:49.312 | 99.99th=[30540] 00:33:49.312 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2290.53, stdev=58.73, samples=19 00:33:49.312 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:49.312 lat (msec) : 10=0.02%, 20=1.10%, 50=98.88% 00:33:49.312 cpu : usr=98.58%, sys=1.07%, ctx=15, majf=0, minf=9 00:33:49.312 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.312 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.312 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:49.312 00:33:49.312 Run status group 0 (all jobs): 00:33:49.312 READ: bw=53.6MiB/s (56.2MB/s), 2277KiB/s-2319KiB/s (2332kB/s-2374kB/s), io=537MiB (563MB), run=10001-10022msec 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # 
destroy_subsystems 0 1 2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:49.312 10:50:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 bdev_null0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 [2024-11-20 10:50:49.326759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.312 bdev_null1 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.312 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.313 { 00:33:49.313 "params": { 
00:33:49.313 "name": "Nvme$subsystem", 00:33:49.313 "trtype": "$TEST_TRANSPORT", 00:33:49.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.313 "adrfam": "ipv4", 00:33:49.313 "trsvcid": "$NVMF_PORT", 00:33:49.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.313 "hdgst": ${hdgst:-false}, 00:33:49.313 "ddgst": ${ddgst:-false} 00:33:49.313 }, 00:33:49.313 "method": "bdev_nvme_attach_controller" 00:33:49.313 } 00:33:49.313 EOF 00:33:49.313 )") 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.313 10:50:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.313 { 00:33:49.313 "params": { 00:33:49.313 "name": "Nvme$subsystem", 00:33:49.313 "trtype": "$TEST_TRANSPORT", 00:33:49.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.313 "adrfam": "ipv4", 00:33:49.313 "trsvcid": "$NVMF_PORT", 00:33:49.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.313 "hdgst": ${hdgst:-false}, 00:33:49.313 "ddgst": ${ddgst:-false} 00:33:49.313 }, 00:33:49.313 "method": "bdev_nvme_attach_controller" 00:33:49.313 } 00:33:49.313 EOF 00:33:49.313 )") 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.313 "params": { 00:33:49.313 "name": "Nvme0", 00:33:49.313 "trtype": "tcp", 00:33:49.313 "traddr": "10.0.0.2", 00:33:49.313 "adrfam": "ipv4", 00:33:49.313 "trsvcid": "4420", 00:33:49.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.313 "hdgst": false, 00:33:49.313 "ddgst": false 00:33:49.313 }, 00:33:49.313 "method": "bdev_nvme_attach_controller" 00:33:49.313 },{ 00:33:49.313 "params": { 00:33:49.313 "name": "Nvme1", 00:33:49.313 "trtype": "tcp", 00:33:49.313 "traddr": "10.0.0.2", 00:33:49.313 "adrfam": "ipv4", 00:33:49.313 "trsvcid": "4420", 00:33:49.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:49.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:49.313 "hdgst": false, 00:33:49.313 "ddgst": false 00:33:49.313 }, 00:33:49.313 "method": "bdev_nvme_attach_controller" 00:33:49.313 }' 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.313 10:50:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.313 10:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.313 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:49.313 ... 00:33:49.313 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:49.313 ... 00:33:49.313 fio-3.35 00:33:49.313 Starting 4 threads 00:33:55.878 00:33:55.878 filename0: (groupid=0, jobs=1): err= 0: pid=3747783: Wed Nov 20 10:50:55 2024 00:33:55.879 read: IOPS=2814, BW=22.0MiB/s (23.1MB/s)(110MiB/5003msec) 00:33:55.879 slat (nsec): min=6108, max=46035, avg=8690.01, stdev=3020.00 00:33:55.879 clat (usec): min=837, max=5543, avg=2815.21, stdev=398.03 00:33:55.879 lat (usec): min=850, max=5550, avg=2823.90, stdev=397.69 00:33:55.879 clat percentiles (usec): 00:33:55.879 | 1.00th=[ 1729], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540], 00:33:55.879 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2999], 00:33:55.879 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3195], 95.00th=[ 3359], 00:33:55.879 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[ 5014], 00:33:55.879 | 99.99th=[ 5538] 00:33:55.879 bw ( KiB/s): min=21408, max=23968, per=26.95%, avg=22521.60, stdev=895.75, samples=10 00:33:55.879 iops : min= 2676, max= 2996, avg=2815.20, stdev=111.97, samples=10 00:33:55.879 lat (usec) : 1000=0.13% 00:33:55.879 lat (msec) : 2=1.83%, 4=96.97%, 10=1.07% 00:33:55.879 cpu : usr=95.98%, sys=3.70%, ctx=8, majf=0, minf=9 00:33:55.879 IO depths : 1=0.3%, 2=6.5%, 4=65.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 complete : 
0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 issued rwts: total=14083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.879 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:55.879 filename0: (groupid=0, jobs=1): err= 0: pid=3747785: Wed Nov 20 10:50:55 2024 00:33:55.879 read: IOPS=2485, BW=19.4MiB/s (20.4MB/s)(97.2MiB/5007msec) 00:33:55.879 slat (nsec): min=6110, max=55300, avg=8512.30, stdev=3194.90 00:33:55.879 clat (usec): min=869, max=8816, avg=3191.90, stdev=447.85 00:33:55.879 lat (usec): min=881, max=8827, avg=3200.42, stdev=447.72 00:33:55.879 clat percentiles (usec): 00:33:55.879 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 2966], 00:33:55.879 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:33:55.879 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4080], 00:33:55.879 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 6128], 00:33:55.879 | 99.99th=[ 8848] 00:33:55.879 bw ( KiB/s): min=18816, max=20576, per=23.82%, avg=19905.60, stdev=628.95, samples=10 00:33:55.879 iops : min= 2352, max= 2572, avg=2488.20, stdev=78.62, samples=10 00:33:55.879 lat (usec) : 1000=0.02% 00:33:55.879 lat (msec) : 2=0.14%, 4=94.12%, 10=5.72% 00:33:55.879 cpu : usr=96.12%, sys=3.56%, ctx=10, majf=0, minf=9 00:33:55.879 IO depths : 1=0.1%, 2=1.5%, 4=71.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 issued rwts: total=12447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.879 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:55.879 filename1: (groupid=0, jobs=1): err= 0: pid=3747786: Wed Nov 20 10:50:55 2024 00:33:55.879 read: IOPS=2479, BW=19.4MiB/s (20.3MB/s)(97.0MiB/5006msec) 00:33:55.879 slat (nsec): min=6124, max=48832, avg=8314.85, stdev=2944.65 00:33:55.879 clat (usec): min=678, 
max=8556, avg=3200.70, stdev=447.19 00:33:55.879 lat (usec): min=690, max=8568, avg=3209.02, stdev=446.97 00:33:55.879 clat percentiles (usec): 00:33:55.879 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2999], 00:33:55.879 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:55.879 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4047], 00:33:55.879 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 5997], 99.95th=[ 5997], 00:33:55.879 | 99.99th=[ 8586] 00:33:55.879 bw ( KiB/s): min=18432, max=20896, per=23.75%, avg=19846.40, stdev=652.38, samples=10 00:33:55.879 iops : min= 2304, max= 2612, avg=2480.80, stdev=81.55, samples=10 00:33:55.879 lat (usec) : 750=0.01% 00:33:55.879 lat (msec) : 2=0.22%, 4=94.42%, 10=5.35% 00:33:55.879 cpu : usr=96.00%, sys=3.68%, ctx=6, majf=0, minf=9 00:33:55.879 IO depths : 1=0.1%, 2=2.0%, 4=71.8%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 issued rwts: total=12412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.879 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:55.879 filename1: (groupid=0, jobs=1): err= 0: pid=3747787: Wed Nov 20 10:50:55 2024 00:33:55.879 read: IOPS=2668, BW=20.8MiB/s (21.9MB/s)(104MiB/5005msec) 00:33:55.879 slat (nsec): min=6107, max=44625, avg=8735.97, stdev=3100.12 00:33:55.879 clat (usec): min=1228, max=8134, avg=2973.25, stdev=432.04 00:33:55.879 lat (usec): min=1234, max=8146, avg=2981.98, stdev=431.85 00:33:55.879 clat percentiles (usec): 00:33:55.879 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2671], 00:33:55.879 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:33:55.879 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3425], 95.00th=[ 3720], 00:33:55.879 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 6194], 00:33:55.879 
| 99.99th=[ 8160] 00:33:55.879 bw ( KiB/s): min=20640, max=22400, per=25.56%, avg=21357.80, stdev=464.46, samples=10 00:33:55.879 iops : min= 2580, max= 2800, avg=2669.70, stdev=58.07, samples=10 00:33:55.879 lat (msec) : 2=0.59%, 4=96.68%, 10=2.73% 00:33:55.879 cpu : usr=96.08%, sys=3.60%, ctx=7, majf=0, minf=9 00:33:55.879 IO depths : 1=0.1%, 2=3.8%, 4=66.3%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.879 issued rwts: total=13354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.879 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:55.879 00:33:55.879 Run status group 0 (all jobs): 00:33:55.879 READ: bw=81.6MiB/s (85.6MB/s), 19.4MiB/s-22.0MiB/s (20.3MB/s-23.1MB/s), io=409MiB (428MB), run=5003-5007msec 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.879 00:33:55.879 real 0m24.754s 00:33:55.879 user 4m51.973s 00:33:55.879 sys 0m5.180s 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.879 10:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 ************************************ 00:33:55.879 END TEST fio_dif_rand_params 00:33:55.879 ************************************ 00:33:55.879 10:50:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:55.879 10:50:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:55.879 10:50:55 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.879 10:50:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:55.879 ************************************ 00:33:55.879 START TEST fio_dif_digest 00:33:55.879 ************************************ 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:55.879 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:33:55.880 bdev_null0 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.880 [2024-11-20 10:50:55.960084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:55.880 { 00:33:55.880 "params": { 00:33:55.880 "name": "Nvme$subsystem", 00:33:55.880 "trtype": "$TEST_TRANSPORT", 00:33:55.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.880 "adrfam": "ipv4", 00:33:55.880 "trsvcid": "$NVMF_PORT", 00:33:55.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.880 "hdgst": ${hdgst:-false}, 00:33:55.880 "ddgst": ${ddgst:-false} 00:33:55.880 }, 00:33:55.880 "method": "bdev_nvme_attach_controller" 00:33:55.880 } 00:33:55.880 EOF 00:33:55.880 )") 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 
-- # shift 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:55.880 10:50:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:55.880 "params": { 00:33:55.880 "name": "Nvme0", 00:33:55.880 "trtype": "tcp", 00:33:55.880 "traddr": "10.0.0.2", 00:33:55.880 "adrfam": "ipv4", 00:33:55.880 "trsvcid": "4420", 00:33:55.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:55.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:55.880 "hdgst": true, 00:33:55.880 "ddgst": true 00:33:55.880 }, 00:33:55.880 "method": "bdev_nvme_attach_controller" 00:33:55.880 }' 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:55.880 10:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.880 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:55.880 ... 00:33:55.880 fio-3.35 00:33:55.880 Starting 3 threads 00:34:08.094 00:34:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=3749036: Wed Nov 20 10:51:06 2024 00:34:08.094 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(367MiB/10045msec) 00:34:08.094 slat (nsec): min=6427, max=32870, avg=11191.47, stdev=1690.76 00:34:08.094 clat (usec): min=7965, max=51606, avg=10244.37, stdev=1825.75 00:34:08.094 lat (usec): min=7977, max=51630, avg=10255.56, stdev=1825.85 00:34:08.094 clat percentiles (usec): 00:34:08.094 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:08.094 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:34:08.094 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:34:08.094 | 99.00th=[12125], 99.50th=[12518], 99.90th=[51643], 99.95th=[51643], 00:34:08.094 | 99.99th=[51643] 00:34:08.094 bw ( KiB/s): min=32512, max=38656, per=35.39%, avg=37516.80, stdev=1296.00, samples=20 00:34:08.094 iops : min= 254, max= 302, avg=293.10, stdev=10.13, samples=20 00:34:08.094 lat (msec) : 10=40.08%, 20=59.75%, 50=0.03%, 100=0.14% 00:34:08.094 cpu : 
usr=94.48%, sys=5.20%, ctx=18, majf=0, minf=88 00:34:08.094 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 issued rwts: total=2934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=3749037: Wed Nov 20 10:51:06 2024 00:34:08.094 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10044msec) 00:34:08.094 slat (nsec): min=6406, max=55911, avg=11188.25, stdev=1850.15 00:34:08.094 clat (usec): min=6540, max=47638, avg=10996.53, stdev=1262.63 00:34:08.094 lat (usec): min=6548, max=47650, avg=11007.72, stdev=1262.64 00:34:08.094 clat percentiles (usec): 00:34:08.094 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:08.094 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:34:08.094 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:34:08.094 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14222], 99.95th=[45351], 00:34:08.094 | 99.99th=[47449] 00:34:08.094 bw ( KiB/s): min=34048, max=36352, per=32.97%, avg=34956.80, stdev=521.84, samples=20 00:34:08.094 iops : min= 266, max= 284, avg=273.10, stdev= 4.08, samples=20 00:34:08.094 lat (msec) : 10=9.29%, 20=90.63%, 50=0.07% 00:34:08.094 cpu : usr=94.73%, sys=4.95%, ctx=18, majf=0, minf=105 00:34:08.094 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 issued rwts: total=2733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.094 filename0: (groupid=0, jobs=1): err= 0: pid=3749038: 
Wed Nov 20 10:51:06 2024 00:34:08.094 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10044msec) 00:34:08.094 slat (nsec): min=6424, max=24743, avg=11162.03, stdev=1604.70 00:34:08.094 clat (usec): min=6527, max=47839, avg=11328.66, stdev=1268.28 00:34:08.094 lat (usec): min=6540, max=47855, avg=11339.82, stdev=1268.37 00:34:08.094 clat percentiles (usec): 00:34:08.094 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:34:08.094 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:08.094 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649], 00:34:08.094 | 99.00th=[13435], 99.50th=[13698], 99.90th=[15401], 99.95th=[44827], 00:34:08.094 | 99.99th=[47973] 00:34:08.094 bw ( KiB/s): min=33024, max=35072, per=32.01%, avg=33932.80, stdev=515.19, samples=20 00:34:08.094 iops : min= 258, max= 274, avg=265.10, stdev= 4.02, samples=20 00:34:08.094 lat (msec) : 10=4.00%, 20=95.93%, 50=0.08% 00:34:08.094 cpu : usr=94.77%, sys=4.91%, ctx=21, majf=0, minf=57 00:34:08.094 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.094 issued rwts: total=2653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.094 00:34:08.094 Run status group 0 (all jobs): 00:34:08.094 READ: bw=104MiB/s (109MB/s), 33.0MiB/s-36.5MiB/s (34.6MB/s-38.3MB/s), io=1040MiB (1091MB), run=10044-10045msec 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest 
-- target/dif.sh@36 -- # local sub_id=0 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.094 00:34:08.094 real 0m11.216s 00:34:08.094 user 0m35.082s 00:34:08.094 sys 0m1.850s 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.094 10:51:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.094 ************************************ 00:34:08.094 END TEST fio_dif_digest 00:34:08.094 ************************************ 00:34:08.094 10:51:07 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:08.094 10:51:07 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.094 rmmod nvme_tcp 00:34:08.094 rmmod nvme_fabrics 00:34:08.094 rmmod nvme_keyring 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3740445 ']' 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3740445 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3740445 ']' 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3740445 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740445 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3740445' 00:34:08.094 killing process with pid 3740445 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3740445 00:34:08.094 10:51:07 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3740445 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:08.094 10:51:07 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:09.472 Waiting for block devices as requested 00:34:09.472 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:09.730 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:09.730 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:09.730 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:09.989 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:09.989 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:09.989 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:10.249 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:34:10.249 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:10.249 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:10.249 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:10.508 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:10.508 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:10.508 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:10.767 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:10.767 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:10.767 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:11.026 10:51:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.026 10:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.026 10:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.934 10:51:13 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.934 00:34:12.934 real 1m14.564s 00:34:12.934 user 7m9.256s 00:34:12.934 sys 0m20.621s 00:34:12.934 10:51:13 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.934 10:51:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.934 ************************************ 00:34:12.934 END TEST nvmf_dif 00:34:12.934 ************************************ 00:34:12.934 10:51:13 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:12.934 10:51:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:12.934 10:51:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.934 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:34:12.934 ************************************ 00:34:12.934 START TEST nvmf_abort_qd_sizes 00:34:12.934 ************************************ 00:34:12.934 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:13.193 * Looking for test storage... 00:34:13.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:13.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.193 --rc genhtml_branch_coverage=1 00:34:13.193 --rc genhtml_function_coverage=1 00:34:13.193 --rc 
genhtml_legend=1 00:34:13.193 --rc geninfo_all_blocks=1 00:34:13.193 --rc geninfo_unexecuted_blocks=1 00:34:13.193 00:34:13.193 ' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:13.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.193 --rc genhtml_branch_coverage=1 00:34:13.193 --rc genhtml_function_coverage=1 00:34:13.193 --rc genhtml_legend=1 00:34:13.193 --rc geninfo_all_blocks=1 00:34:13.193 --rc geninfo_unexecuted_blocks=1 00:34:13.193 00:34:13.193 ' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:13.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.193 --rc genhtml_branch_coverage=1 00:34:13.193 --rc genhtml_function_coverage=1 00:34:13.193 --rc genhtml_legend=1 00:34:13.193 --rc geninfo_all_blocks=1 00:34:13.193 --rc geninfo_unexecuted_blocks=1 00:34:13.193 00:34:13.193 ' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:13.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.193 --rc genhtml_branch_coverage=1 00:34:13.193 --rc genhtml_function_coverage=1 00:34:13.193 --rc genhtml_legend=1 00:34:13.193 --rc geninfo_all_blocks=1 00:34:13.193 --rc geninfo_unexecuted_blocks=1 00:34:13.193 00:34:13.193 ' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:13.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:34:13.193 10:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:19.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:19.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:19.752 Found net devices under 0000:86:00.0: cvl_0_0 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:19.752 Found net devices under 0000:86:00.1: cvl_0_1 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.752 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:19.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:34:19.753 00:34:19.753 --- 10.0.0.2 ping statistics --- 00:34:19.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.753 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:34:19.753 00:34:19.753 --- 10.0.0.1 ping statistics --- 00:34:19.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.753 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:19.753 10:51:19 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:22.288 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:22.288 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:22.288 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:22.857 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3757352 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3757352 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3757352 ']' 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:22.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.857 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:23.116 [2024-11-20 10:51:23.604391] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:34:23.116 [2024-11-20 10:51:23.604440] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.116 [2024-11-20 10:51:23.687687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:23.116 [2024-11-20 10:51:23.729394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.116 [2024-11-20 10:51:23.729437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.116 [2024-11-20 10:51:23.729444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.116 [2024-11-20 10:51:23.729449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.116 [2024-11-20 10:51:23.729455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:23.116 [2024-11-20 10:51:23.730888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.116 [2024-11-20 10:51:23.730999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.116 [2024-11-20 10:51:23.731037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.116 [2024-11-20 10:51:23.731038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.116 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.116 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:23.116 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.116 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.116 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.374 10:51:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:23.374 ************************************ 00:34:23.374 START TEST spdk_target_abort 00:34:23.374 ************************************ 00:34:23.374 10:51:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:23.374 10:51:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:23.374 10:51:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:23.374 10:51:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.374 10:51:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.654 spdk_targetn1 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.654 [2024-11-20 10:51:26.754069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.654 [2024-11-20 10:51:26.797176] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:26.654 10:51:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.938 Initializing NVMe Controllers 00:34:29.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:29.938 Initialization complete. Launching workers. 
00:34:29.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15939, failed: 0 00:34:29.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1340, failed to submit 14599 00:34:29.938 success 751, unsuccessful 589, failed 0 00:34:29.938 10:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.938 10:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.231 Initializing NVMe Controllers 00:34:33.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:33.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:33.231 Initialization complete. Launching workers. 00:34:33.231 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8603, failed: 0 00:34:33.231 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7344 00:34:33.231 success 319, unsuccessful 940, failed 0 00:34:33.231 10:51:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.231 10:51:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.559 Initializing NVMe Controllers 00:34:36.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.559 Initialization complete. Launching workers. 
00:34:36.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37795, failed: 0 00:34:36.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2862, failed to submit 34933 00:34:36.559 success 578, unsuccessful 2284, failed 0 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.560 10:51:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3757352 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3757352 ']' 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3757352 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757352 00:34:37.494 10:51:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757352' 00:34:37.494 killing process with pid 3757352 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3757352 00:34:37.494 10:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3757352 00:34:37.494 00:34:37.494 real 0m14.211s 00:34:37.494 user 0m54.111s 00:34:37.494 sys 0m2.648s 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:37.494 ************************************ 00:34:37.494 END TEST spdk_target_abort 00:34:37.494 ************************************ 00:34:37.494 10:51:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:37.494 10:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:37.494 10:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.494 10:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:37.494 ************************************ 00:34:37.494 START TEST kernel_target_abort 00:34:37.494 ************************************ 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:37.494 10:51:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:37.494 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:37.753 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:37.753 10:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:40.288 Waiting for block devices as requested 00:34:40.288 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:40.548 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:40.548 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.548 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.808 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.808 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.808 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:41.068 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.068 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.068 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:41.068 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:41.327 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:41.327 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:41.327 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:41.587 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:41.587 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.587 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:41.847 10:51:42 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:41.847 No valid GPT data, bailing 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:41.847 00:34:41.847 Discovery Log Number of Records 2, Generation counter 2 00:34:41.847 =====Discovery Log Entry 0====== 00:34:41.847 trtype: tcp 00:34:41.847 adrfam: ipv4 00:34:41.847 subtype: current discovery subsystem 00:34:41.847 treq: not specified, sq flow control disable supported 00:34:41.847 portid: 1 00:34:41.847 trsvcid: 4420 00:34:41.847 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:41.847 traddr: 10.0.0.1 00:34:41.847 eflags: none 00:34:41.847 sectype: none 00:34:41.847 =====Discovery Log Entry 1====== 00:34:41.847 trtype: tcp 00:34:41.847 adrfam: ipv4 00:34:41.847 subtype: nvme subsystem 00:34:41.847 treq: not specified, sq flow control disable supported 00:34:41.847 portid: 1 00:34:41.847 trsvcid: 4420 00:34:41.847 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:41.847 traddr: 10.0.0.1 00:34:41.847 eflags: none 00:34:41.847 sectype: none 00:34:41.847 10:51:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:41.847 10:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.134 Initializing NVMe Controllers 00:34:45.134 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.134 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.134 Initialization complete. Launching workers. 
00:34:45.134 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93246, failed: 0 00:34:45.134 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93246, failed to submit 0 00:34:45.134 success 0, unsuccessful 93246, failed 0 00:34:45.134 10:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.134 10:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.418 Initializing NVMe Controllers 00:34:48.418 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.418 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.418 Initialization complete. Launching workers. 00:34:48.418 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144178, failed: 0 00:34:48.418 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36082, failed to submit 108096 00:34:48.418 success 0, unsuccessful 36082, failed 0 00:34:48.418 10:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.418 10:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.701 Initializing NVMe Controllers 00:34:51.701 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:51.701 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:51.701 Initialization complete. Launching workers. 
00:34:51.701 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134278, failed: 0 00:34:51.701 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33614, failed to submit 100664 00:34:51.701 success 0, unsuccessful 33614, failed 0 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:51.701 10:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:54.235 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:54.235 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:54.235 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:54.235 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:54.235 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:54.236 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.207 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:55.207 00:34:55.207 real 0m17.514s 00:34:55.207 user 0m9.221s 00:34:55.207 sys 0m4.981s 00:34:55.207 10:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.207 10:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:55.207 ************************************ 00:34:55.207 END TEST kernel_target_abort 00:34:55.207 ************************************ 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:55.207 rmmod nvme_tcp 00:34:55.207 rmmod nvme_fabrics 00:34:55.207 rmmod nvme_keyring 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3757352 ']' 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3757352 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3757352 ']' 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3757352 00:34:55.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3757352) - No such process 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3757352 is not found' 00:34:55.207 Process with pid 3757352 is not found 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:55.207 10:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:58.498 Waiting for block devices as requested 00:34:58.498 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:58.498 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:58.498 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:58.756 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:58.756 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:58.756 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:59.015 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:59.015 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:59.015 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:59.015 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:59.273 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.274 10:51:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.809 10:52:01 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:01.809 00:35:01.809 real 0m48.308s 00:35:01.809 user 1m7.600s 00:35:01.809 sys 0m16.441s 00:35:01.809 10:52:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.809 10:52:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.809 ************************************ 00:35:01.809 END TEST nvmf_abort_qd_sizes 00:35:01.809 ************************************ 00:35:01.809 10:52:01 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:01.809 10:52:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:01.809 10:52:01 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:01.809 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:35:01.809 ************************************ 00:35:01.809 START TEST keyring_file 00:35:01.809 ************************************ 00:35:01.809 10:52:02 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:01.809 * Looking for test storage... 00:35:01.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:01.809 10:52:02 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:01.809 10:52:02 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:01.809 10:52:02 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:01.809 10:52:02 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:01.809 10:52:02 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:01.809 10:52:02 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.810 --rc genhtml_branch_coverage=1 00:35:01.810 --rc genhtml_function_coverage=1 00:35:01.810 --rc genhtml_legend=1 00:35:01.810 --rc geninfo_all_blocks=1 00:35:01.810 --rc geninfo_unexecuted_blocks=1 00:35:01.810 00:35:01.810 ' 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.810 --rc genhtml_branch_coverage=1 00:35:01.810 --rc genhtml_function_coverage=1 00:35:01.810 --rc genhtml_legend=1 00:35:01.810 --rc geninfo_all_blocks=1 00:35:01.810 --rc 
geninfo_unexecuted_blocks=1 00:35:01.810 00:35:01.810 ' 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.810 --rc genhtml_branch_coverage=1 00:35:01.810 --rc genhtml_function_coverage=1 00:35:01.810 --rc genhtml_legend=1 00:35:01.810 --rc geninfo_all_blocks=1 00:35:01.810 --rc geninfo_unexecuted_blocks=1 00:35:01.810 00:35:01.810 ' 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.810 --rc genhtml_branch_coverage=1 00:35:01.810 --rc genhtml_function_coverage=1 00:35:01.810 --rc genhtml_legend=1 00:35:01.810 --rc geninfo_all_blocks=1 00:35:01.810 --rc geninfo_unexecuted_blocks=1 00:35:01.810 00:35:01.810 ' 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.810 10:52:02 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.810 10:52:02 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.810 10:52:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.810 10:52:02 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.810 10:52:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.810 10:52:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:01.810 10:52:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:01.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PQRDthXXUT 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PQRDthXXUT 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PQRDthXXUT 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PQRDthXXUT 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.piiLGFahFq 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:01.810 10:52:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.piiLGFahFq 00:35:01.810 10:52:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.piiLGFahFq 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.piiLGFahFq 
00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=3766143 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:01.810 10:52:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3766143 00:35:01.810 10:52:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3766143 ']' 00:35:01.811 10:52:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.811 10:52:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.811 10:52:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.811 10:52:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.811 10:52:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:01.811 [2024-11-20 10:52:02.392641] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:35:01.811 [2024-11-20 10:52:02.392690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766143 ] 00:35:01.811 [2024-11-20 10:52:02.468503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.811 [2024-11-20 10:52:02.510835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:02.069 10:52:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:02.069 [2024-11-20 10:52:02.733876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.069 null0 00:35:02.069 [2024-11-20 10:52:02.765932] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:02.069 [2024-11-20 10:52:02.766311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.069 10:52:02 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:02.069 10:52:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.070 10:52:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:02.070 [2024-11-20 10:52:02.793998] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:02.070 request: 00:35:02.070 { 00:35:02.070 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.070 "secure_channel": false, 00:35:02.070 "listen_address": { 00:35:02.328 "trtype": "tcp", 00:35:02.328 "traddr": "127.0.0.1", 00:35:02.328 "trsvcid": "4420" 00:35:02.328 }, 00:35:02.328 "method": "nvmf_subsystem_add_listener", 00:35:02.328 "req_id": 1 00:35:02.328 } 00:35:02.328 Got JSON-RPC error response 00:35:02.328 response: 00:35:02.328 { 00:35:02.328 "code": -32602, 00:35:02.328 "message": "Invalid parameters" 00:35:02.328 } 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:02.328 10:52:02 keyring_file -- keyring/file.sh@47 -- # bperfpid=3766156 00:35:02.328 10:52:02 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3766156 /var/tmp/bperf.sock 00:35:02.328 10:52:02 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:02.328 10:52:02 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3766156 ']' 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.328 10:52:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:02.328 [2024-11-20 10:52:02.849783] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 00:35:02.328 [2024-11-20 10:52:02.849828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766156 ] 00:35:02.328 [2024-11-20 10:52:02.925008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.328 [2024-11-20 10:52:02.968492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.587 10:52:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.587 10:52:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:02.587 10:52:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:02.587 10:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:02.587 10:52:03 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.piiLGFahFq 00:35:02.587 10:52:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.piiLGFahFq 00:35:02.845 10:52:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:02.845 10:52:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:02.845 10:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.845 10:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:02.845 10:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.104 10:52:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PQRDthXXUT == \/\t\m\p\/\t\m\p\.\P\Q\R\D\t\h\X\X\U\T ]] 00:35:03.104 10:52:03 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:03.104 10:52:03 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:03.104 10:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.104 10:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:03.104 10:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.362 10:52:03 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.piiLGFahFq == \/\t\m\p\/\t\m\p\.\p\i\i\L\G\F\a\h\F\q ]] 00:35:03.362 10:52:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:03.362 10:52:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.362 10:52:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.362 10:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.362 10:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.362 10:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:03.362 10:52:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:03.362 10:52:04 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:03.362 10:52:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:03.362 10:52:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.362 10:52:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.362 10:52:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:03.362 10:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.621 10:52:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:03.621 10:52:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:03.621 10:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:03.879 [2024-11-20 10:52:04.416033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:03.879 nvme0n1 00:35:03.879 10:52:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:03.879 10:52:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.879 10:52:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.879 10:52:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.879 10:52:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.879 10:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:04.136 10:52:04 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:04.136 10:52:04 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:04.136 10:52:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:04.136 10:52:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.136 10:52:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.136 10:52:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:04.136 10:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.395 10:52:04 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:04.395 10:52:04 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.395 Running I/O for 1 seconds... 00:35:05.331 18785.00 IOPS, 73.38 MiB/s 00:35:05.331 Latency(us) 00:35:05.331 [2024-11-20T09:52:06.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.331 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:05.331 nvme0n1 : 1.00 18829.98 73.55 0.00 0.00 6785.72 4388.06 16526.47 00:35:05.331 [2024-11-20T09:52:06.062Z] =================================================================================================================== 00:35:05.331 [2024-11-20T09:52:06.062Z] Total : 18829.98 73.55 0.00 0.00 6785.72 4388.06 16526.47 00:35:05.331 { 00:35:05.331 "results": [ 00:35:05.331 { 00:35:05.331 "job": "nvme0n1", 00:35:05.331 "core_mask": "0x2", 00:35:05.331 "workload": "randrw", 00:35:05.331 "percentage": 50, 00:35:05.331 "status": "finished", 00:35:05.331 "queue_depth": 128, 00:35:05.331 "io_size": 4096, 00:35:05.331 "runtime": 1.004409, 00:35:05.331 "iops": 18829.9786242457, 00:35:05.331 "mibps": 73.55460400095977, 
00:35:05.331 "io_failed": 0, 00:35:05.331 "io_timeout": 0, 00:35:05.331 "avg_latency_us": 6785.71774629367, 00:35:05.331 "min_latency_us": 4388.062608695652, 00:35:05.331 "max_latency_us": 16526.46956521739 00:35:05.331 } 00:35:05.331 ], 00:35:05.331 "core_count": 1 00:35:05.331 } 00:35:05.331 10:52:06 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:05.331 10:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:05.589 10:52:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:05.589 10:52:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:05.589 10:52:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.589 10:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.589 10:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:05.589 10:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.847 10:52:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:05.847 10:52:06 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:05.847 10:52:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:05.847 10:52:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.847 10:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.847 10:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.847 10:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.106 10:52:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:06.106 10:52:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.106 [2024-11-20 10:52:06.807955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:06.106 [2024-11-20 10:52:06.808049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87ff0 (107): Transport endpoint is not connected 00:35:06.106 [2024-11-20 10:52:06.809043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87ff0 (9): Bad file descriptor 00:35:06.106 [2024-11-20 10:52:06.810044] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:06.106 [2024-11-20 10:52:06.810054] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:06.106 [2024-11-20 10:52:06.810061] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:06.106 [2024-11-20 10:52:06.810070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:06.106 request: 00:35:06.106 { 00:35:06.106 "name": "nvme0", 00:35:06.106 "trtype": "tcp", 00:35:06.106 "traddr": "127.0.0.1", 00:35:06.106 "adrfam": "ipv4", 00:35:06.106 "trsvcid": "4420", 00:35:06.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:06.106 "prchk_reftag": false, 00:35:06.106 "prchk_guard": false, 00:35:06.106 "hdgst": false, 00:35:06.106 "ddgst": false, 00:35:06.106 "psk": "key1", 00:35:06.106 "allow_unrecognized_csi": false, 00:35:06.106 "method": "bdev_nvme_attach_controller", 00:35:06.106 "req_id": 1 00:35:06.106 } 00:35:06.106 Got JSON-RPC error response 00:35:06.106 response: 00:35:06.106 { 00:35:06.106 "code": -5, 00:35:06.106 "message": "Input/output error" 00:35:06.106 } 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.106 10:52:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.106 10:52:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.106 10:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.366 10:52:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:06.366 10:52:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:06.366 10:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.366 10:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.366 10:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.366 10:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.366 10:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.625 10:52:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:06.625 10:52:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:06.625 10:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:06.882 10:52:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:06.882 10:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:07.140 10:52:07 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:07.140 10:52:07 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:07.140 10:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.140 10:52:07 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:07.140 10:52:07 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.PQRDthXXUT 00:35:07.140 10:52:07 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.140 10:52:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.140 10:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.398 [2024-11-20 10:52:08.022184] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PQRDthXXUT': 0100660 00:35:07.398 [2024-11-20 10:52:08.022214] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:07.398 request: 00:35:07.398 { 00:35:07.398 "name": "key0", 00:35:07.398 "path": "/tmp/tmp.PQRDthXXUT", 00:35:07.398 "method": "keyring_file_add_key", 00:35:07.399 "req_id": 1 00:35:07.399 } 00:35:07.399 Got JSON-RPC error response 00:35:07.399 response: 00:35:07.399 { 00:35:07.399 "code": -1, 00:35:07.399 "message": "Operation not permitted" 00:35:07.399 } 00:35:07.399 10:52:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:07.399 10:52:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:07.399 10:52:08 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:07.399 10:52:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:07.399 10:52:08 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.PQRDthXXUT 00:35:07.399 10:52:08 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.399 10:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PQRDthXXUT 00:35:07.657 10:52:08 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.PQRDthXXUT 00:35:07.657 10:52:08 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:07.657 10:52:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.657 10:52:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.657 10:52:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.657 10:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.657 10:52:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.915 10:52:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:07.915 10:52:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:07.915 10:52:08 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.915 10:52:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.915 10:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.174 [2024-11-20 10:52:08.647861] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PQRDthXXUT': No such file or directory 00:35:08.174 [2024-11-20 10:52:08.647891] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:08.174 [2024-11-20 10:52:08.647908] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:08.174 [2024-11-20 10:52:08.647915] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:08.174 [2024-11-20 10:52:08.647922] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:08.174 [2024-11-20 10:52:08.647928] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:08.174 request: 00:35:08.174 { 00:35:08.174 "name": "nvme0", 00:35:08.174 "trtype": "tcp", 00:35:08.174 "traddr": "127.0.0.1", 00:35:08.174 "adrfam": "ipv4", 00:35:08.174 "trsvcid": "4420", 00:35:08.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.174 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:08.174 "prchk_reftag": false, 00:35:08.174 "prchk_guard": false, 00:35:08.174 "hdgst": false, 00:35:08.174 "ddgst": false, 00:35:08.174 "psk": "key0", 00:35:08.174 "allow_unrecognized_csi": false, 00:35:08.174 "method": "bdev_nvme_attach_controller", 00:35:08.174 "req_id": 1 00:35:08.174 } 00:35:08.174 Got JSON-RPC error response 00:35:08.174 response: 00:35:08.174 { 00:35:08.174 "code": -19, 00:35:08.174 "message": "No such device" 00:35:08.174 } 00:35:08.174 10:52:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:08.174 10:52:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:08.174 10:52:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:08.174 10:52:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:08.174 10:52:08 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:08.174 10:52:08 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wxuvRra7b7 00:35:08.174 10:52:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:08.174 10:52:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:08.174 10:52:08 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:08.174 10:52:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:08.174 10:52:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:08.175 10:52:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:08.175 10:52:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:08.175 10:52:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wxuvRra7b7 00:35:08.175 10:52:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wxuvRra7b7 00:35:08.175 10:52:08 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.wxuvRra7b7 00:35:08.175 10:52:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wxuvRra7b7 00:35:08.175 10:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wxuvRra7b7 00:35:08.433 10:52:09 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.434 10:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.691 nvme0n1 00:35:08.691 10:52:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:08.691 10:52:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.691 10:52:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.691 10:52:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.691 10:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.691 
10:52:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.948 10:52:09 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:08.948 10:52:09 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:08.948 10:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.206 10:52:09 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:09.206 10:52:09 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:09.206 10:52:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.206 10:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.206 10:52:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.464 10:52:09 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:09.464 10:52:09 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:09.464 10:52:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.464 10:52:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.464 10:52:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.464 10:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.464 10:52:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.464 10:52:10 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:09.464 10:52:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:09.464 10:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:09.722 10:52:10 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:09.722 10:52:10 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:09.722 10:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.980 10:52:10 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:09.980 10:52:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wxuvRra7b7 00:35:09.980 10:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wxuvRra7b7 00:35:10.238 10:52:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.piiLGFahFq 00:35:10.238 10:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.piiLGFahFq 00:35:10.238 10:52:10 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.238 10:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.496 nvme0n1 00:35:10.496 10:52:11 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:10.496 10:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:10.754 10:52:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:10.754 "subsystems": [ 00:35:10.754 { 00:35:10.754 "subsystem": "keyring", 00:35:10.754 
"config": [ 00:35:10.754 { 00:35:10.754 "method": "keyring_file_add_key", 00:35:10.754 "params": { 00:35:10.754 "name": "key0", 00:35:10.754 "path": "/tmp/tmp.wxuvRra7b7" 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "keyring_file_add_key", 00:35:10.754 "params": { 00:35:10.754 "name": "key1", 00:35:10.754 "path": "/tmp/tmp.piiLGFahFq" 00:35:10.754 } 00:35:10.754 } 00:35:10.754 ] 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "subsystem": "iobuf", 00:35:10.754 "config": [ 00:35:10.754 { 00:35:10.754 "method": "iobuf_set_options", 00:35:10.754 "params": { 00:35:10.754 "small_pool_count": 8192, 00:35:10.754 "large_pool_count": 1024, 00:35:10.754 "small_bufsize": 8192, 00:35:10.754 "large_bufsize": 135168, 00:35:10.754 "enable_numa": false 00:35:10.754 } 00:35:10.754 } 00:35:10.754 ] 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "subsystem": "sock", 00:35:10.754 "config": [ 00:35:10.754 { 00:35:10.754 "method": "sock_set_default_impl", 00:35:10.754 "params": { 00:35:10.754 "impl_name": "posix" 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "sock_impl_set_options", 00:35:10.754 "params": { 00:35:10.754 "impl_name": "ssl", 00:35:10.754 "recv_buf_size": 4096, 00:35:10.754 "send_buf_size": 4096, 00:35:10.754 "enable_recv_pipe": true, 00:35:10.754 "enable_quickack": false, 00:35:10.754 "enable_placement_id": 0, 00:35:10.754 "enable_zerocopy_send_server": true, 00:35:10.754 "enable_zerocopy_send_client": false, 00:35:10.754 "zerocopy_threshold": 0, 00:35:10.754 "tls_version": 0, 00:35:10.754 "enable_ktls": false 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "sock_impl_set_options", 00:35:10.754 "params": { 00:35:10.754 "impl_name": "posix", 00:35:10.754 "recv_buf_size": 2097152, 00:35:10.754 "send_buf_size": 2097152, 00:35:10.754 "enable_recv_pipe": true, 00:35:10.754 "enable_quickack": false, 00:35:10.754 "enable_placement_id": 0, 00:35:10.754 "enable_zerocopy_send_server": true, 00:35:10.754 
"enable_zerocopy_send_client": false, 00:35:10.754 "zerocopy_threshold": 0, 00:35:10.754 "tls_version": 0, 00:35:10.754 "enable_ktls": false 00:35:10.754 } 00:35:10.754 } 00:35:10.754 ] 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "subsystem": "vmd", 00:35:10.754 "config": [] 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "subsystem": "accel", 00:35:10.754 "config": [ 00:35:10.754 { 00:35:10.754 "method": "accel_set_options", 00:35:10.754 "params": { 00:35:10.754 "small_cache_size": 128, 00:35:10.754 "large_cache_size": 16, 00:35:10.754 "task_count": 2048, 00:35:10.754 "sequence_count": 2048, 00:35:10.754 "buf_count": 2048 00:35:10.754 } 00:35:10.754 } 00:35:10.754 ] 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "subsystem": "bdev", 00:35:10.754 "config": [ 00:35:10.754 { 00:35:10.754 "method": "bdev_set_options", 00:35:10.754 "params": { 00:35:10.754 "bdev_io_pool_size": 65535, 00:35:10.754 "bdev_io_cache_size": 256, 00:35:10.754 "bdev_auto_examine": true, 00:35:10.754 "iobuf_small_cache_size": 128, 00:35:10.754 "iobuf_large_cache_size": 16 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "bdev_raid_set_options", 00:35:10.754 "params": { 00:35:10.754 "process_window_size_kb": 1024, 00:35:10.754 "process_max_bandwidth_mb_sec": 0 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "bdev_iscsi_set_options", 00:35:10.754 "params": { 00:35:10.754 "timeout_sec": 30 00:35:10.754 } 00:35:10.754 }, 00:35:10.754 { 00:35:10.754 "method": "bdev_nvme_set_options", 00:35:10.754 "params": { 00:35:10.754 "action_on_timeout": "none", 00:35:10.754 "timeout_us": 0, 00:35:10.754 "timeout_admin_us": 0, 00:35:10.754 "keep_alive_timeout_ms": 10000, 00:35:10.754 "arbitration_burst": 0, 00:35:10.754 "low_priority_weight": 0, 00:35:10.754 "medium_priority_weight": 0, 00:35:10.754 "high_priority_weight": 0, 00:35:10.754 "nvme_adminq_poll_period_us": 10000, 00:35:10.754 "nvme_ioq_poll_period_us": 0, 00:35:10.754 "io_queue_requests": 512, 00:35:10.754 
"delay_cmd_submit": true, 00:35:10.754 "transport_retry_count": 4, 00:35:10.754 "bdev_retry_count": 3, 00:35:10.754 "transport_ack_timeout": 0, 00:35:10.754 "ctrlr_loss_timeout_sec": 0, 00:35:10.755 "reconnect_delay_sec": 0, 00:35:10.755 "fast_io_fail_timeout_sec": 0, 00:35:10.755 "disable_auto_failback": false, 00:35:10.755 "generate_uuids": false, 00:35:10.755 "transport_tos": 0, 00:35:10.755 "nvme_error_stat": false, 00:35:10.755 "rdma_srq_size": 0, 00:35:10.755 "io_path_stat": false, 00:35:10.755 "allow_accel_sequence": false, 00:35:10.755 "rdma_max_cq_size": 0, 00:35:10.755 "rdma_cm_event_timeout_ms": 0, 00:35:10.755 "dhchap_digests": [ 00:35:10.755 "sha256", 00:35:10.755 "sha384", 00:35:10.755 "sha512" 00:35:10.755 ], 00:35:10.755 "dhchap_dhgroups": [ 00:35:10.755 "null", 00:35:10.755 "ffdhe2048", 00:35:10.755 "ffdhe3072", 00:35:10.755 "ffdhe4096", 00:35:10.755 "ffdhe6144", 00:35:10.755 "ffdhe8192" 00:35:10.755 ] 00:35:10.755 } 00:35:10.755 }, 00:35:10.755 { 00:35:10.755 "method": "bdev_nvme_attach_controller", 00:35:10.755 "params": { 00:35:10.755 "name": "nvme0", 00:35:10.755 "trtype": "TCP", 00:35:10.755 "adrfam": "IPv4", 00:35:10.755 "traddr": "127.0.0.1", 00:35:10.755 "trsvcid": "4420", 00:35:10.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.755 "prchk_reftag": false, 00:35:10.755 "prchk_guard": false, 00:35:10.755 "ctrlr_loss_timeout_sec": 0, 00:35:10.755 "reconnect_delay_sec": 0, 00:35:10.755 "fast_io_fail_timeout_sec": 0, 00:35:10.755 "psk": "key0", 00:35:10.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.755 "hdgst": false, 00:35:10.755 "ddgst": false, 00:35:10.755 "multipath": "multipath" 00:35:10.755 } 00:35:10.755 }, 00:35:10.755 { 00:35:10.755 "method": "bdev_nvme_set_hotplug", 00:35:10.755 "params": { 00:35:10.755 "period_us": 100000, 00:35:10.755 "enable": false 00:35:10.755 } 00:35:10.755 }, 00:35:10.755 { 00:35:10.755 "method": "bdev_wait_for_examine" 00:35:10.755 } 00:35:10.755 ] 00:35:10.755 }, 00:35:10.755 { 00:35:10.755 
"subsystem": "nbd", 00:35:10.755 "config": [] 00:35:10.755 } 00:35:10.755 ] 00:35:10.755 }' 00:35:10.755 10:52:11 keyring_file -- keyring/file.sh@115 -- # killprocess 3766156 00:35:10.755 10:52:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3766156 ']' 00:35:10.755 10:52:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3766156 00:35:10.755 10:52:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:10.755 10:52:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.755 10:52:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3766156 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3766156' 00:35:11.014 killing process with pid 3766156 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@973 -- # kill 3766156 00:35:11.014 Received shutdown signal, test time was about 1.000000 seconds 00:35:11.014 00:35:11.014 Latency(us) 00:35:11.014 [2024-11-20T09:52:11.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.014 [2024-11-20T09:52:11.745Z] =================================================================================================================== 00:35:11.014 [2024-11-20T09:52:11.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@978 -- # wait 3766156 00:35:11.014 10:52:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=3767670 00:35:11.014 10:52:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3767670 /var/tmp/bperf.sock 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3767670 ']' 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:11.014 10:52:11 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.014 10:52:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.014 10:52:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:11.014 "subsystems": [ 00:35:11.014 { 00:35:11.014 "subsystem": "keyring", 00:35:11.014 "config": [ 00:35:11.014 { 00:35:11.014 "method": "keyring_file_add_key", 00:35:11.014 "params": { 00:35:11.014 "name": "key0", 00:35:11.014 "path": "/tmp/tmp.wxuvRra7b7" 00:35:11.014 } 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "method": "keyring_file_add_key", 00:35:11.014 "params": { 00:35:11.014 "name": "key1", 00:35:11.014 "path": "/tmp/tmp.piiLGFahFq" 00:35:11.014 } 00:35:11.014 } 00:35:11.014 ] 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "subsystem": "iobuf", 00:35:11.014 "config": [ 00:35:11.014 { 00:35:11.014 "method": "iobuf_set_options", 00:35:11.014 "params": { 00:35:11.014 "small_pool_count": 8192, 00:35:11.014 "large_pool_count": 1024, 00:35:11.014 "small_bufsize": 8192, 00:35:11.014 "large_bufsize": 135168, 00:35:11.014 "enable_numa": false 00:35:11.014 } 00:35:11.014 } 00:35:11.014 ] 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "subsystem": "sock", 00:35:11.014 "config": [ 00:35:11.014 { 00:35:11.014 "method": "sock_set_default_impl", 00:35:11.014 "params": { 00:35:11.014 "impl_name": "posix" 00:35:11.014 } 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "method": "sock_impl_set_options", 00:35:11.014 
"params": { 00:35:11.014 "impl_name": "ssl", 00:35:11.014 "recv_buf_size": 4096, 00:35:11.014 "send_buf_size": 4096, 00:35:11.014 "enable_recv_pipe": true, 00:35:11.014 "enable_quickack": false, 00:35:11.014 "enable_placement_id": 0, 00:35:11.014 "enable_zerocopy_send_server": true, 00:35:11.014 "enable_zerocopy_send_client": false, 00:35:11.014 "zerocopy_threshold": 0, 00:35:11.014 "tls_version": 0, 00:35:11.014 "enable_ktls": false 00:35:11.014 } 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "method": "sock_impl_set_options", 00:35:11.014 "params": { 00:35:11.014 "impl_name": "posix", 00:35:11.014 "recv_buf_size": 2097152, 00:35:11.014 "send_buf_size": 2097152, 00:35:11.014 "enable_recv_pipe": true, 00:35:11.014 "enable_quickack": false, 00:35:11.014 "enable_placement_id": 0, 00:35:11.014 "enable_zerocopy_send_server": true, 00:35:11.014 "enable_zerocopy_send_client": false, 00:35:11.014 "zerocopy_threshold": 0, 00:35:11.014 "tls_version": 0, 00:35:11.014 "enable_ktls": false 00:35:11.014 } 00:35:11.014 } 00:35:11.014 ] 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "subsystem": "vmd", 00:35:11.014 "config": [] 00:35:11.014 }, 00:35:11.014 { 00:35:11.014 "subsystem": "accel", 00:35:11.014 "config": [ 00:35:11.014 { 00:35:11.014 "method": "accel_set_options", 00:35:11.014 "params": { 00:35:11.015 "small_cache_size": 128, 00:35:11.015 "large_cache_size": 16, 00:35:11.015 "task_count": 2048, 00:35:11.015 "sequence_count": 2048, 00:35:11.015 "buf_count": 2048 00:35:11.015 } 00:35:11.015 } 00:35:11.015 ] 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "subsystem": "bdev", 00:35:11.015 "config": [ 00:35:11.015 { 00:35:11.015 "method": "bdev_set_options", 00:35:11.015 "params": { 00:35:11.015 "bdev_io_pool_size": 65535, 00:35:11.015 "bdev_io_cache_size": 256, 00:35:11.015 "bdev_auto_examine": true, 00:35:11.015 "iobuf_small_cache_size": 128, 00:35:11.015 "iobuf_large_cache_size": 16 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_raid_set_options", 
00:35:11.015 "params": { 00:35:11.015 "process_window_size_kb": 1024, 00:35:11.015 "process_max_bandwidth_mb_sec": 0 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_iscsi_set_options", 00:35:11.015 "params": { 00:35:11.015 "timeout_sec": 30 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_nvme_set_options", 00:35:11.015 "params": { 00:35:11.015 "action_on_timeout": "none", 00:35:11.015 "timeout_us": 0, 00:35:11.015 "timeout_admin_us": 0, 00:35:11.015 "keep_alive_timeout_ms": 10000, 00:35:11.015 "arbitration_burst": 0, 00:35:11.015 "low_priority_weight": 0, 00:35:11.015 "medium_priority_weight": 0, 00:35:11.015 "high_priority_weight": 0, 00:35:11.015 "nvme_adminq_poll_period_us": 10000, 00:35:11.015 "nvme_ioq_poll_period_us": 0, 00:35:11.015 "io_queue_requests": 512, 00:35:11.015 "delay_cmd_submit": true, 00:35:11.015 "transport_retry_count": 4, 00:35:11.015 "bdev_retry_count": 3, 00:35:11.015 "transport_ack_timeout": 0, 00:35:11.015 "ctrlr_loss_timeout_sec": 0, 00:35:11.015 "reconnect_delay_sec": 0, 00:35:11.015 "fast_io_fail_timeout_sec": 0, 00:35:11.015 "disable_auto_failback": false, 00:35:11.015 "generate_uuids": false, 00:35:11.015 "transport_tos": 0, 00:35:11.015 "nvme_error_stat": false, 00:35:11.015 "rdma_srq_size": 0, 00:35:11.015 "io_path_stat": false, 00:35:11.015 "allow_accel_sequence": false, 00:35:11.015 "rdma_max_cq_size": 0, 00:35:11.015 "rdma_cm_event_timeout_ms": 0, 00:35:11.015 "dhchap_digests": [ 00:35:11.015 "sha256", 00:35:11.015 "sha384", 00:35:11.015 "sha512" 00:35:11.015 ], 00:35:11.015 "dhchap_dhgroups": [ 00:35:11.015 "null", 00:35:11.015 "ffdhe2048", 00:35:11.015 "ffdhe3072", 00:35:11.015 "ffdhe4096", 00:35:11.015 "ffdhe6144", 00:35:11.015 "ffdhe8192" 00:35:11.015 ] 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_nvme_attach_controller", 00:35:11.015 "params": { 00:35:11.015 "name": "nvme0", 00:35:11.015 "trtype": "TCP", 00:35:11.015 "adrfam": "IPv4", 
00:35:11.015 "traddr": "127.0.0.1", 00:35:11.015 "trsvcid": "4420", 00:35:11.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.015 "prchk_reftag": false, 00:35:11.015 "prchk_guard": false, 00:35:11.015 "ctrlr_loss_timeout_sec": 0, 00:35:11.015 "reconnect_delay_sec": 0, 00:35:11.015 "fast_io_fail_timeout_sec": 0, 00:35:11.015 "psk": "key0", 00:35:11.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.015 "hdgst": false, 00:35:11.015 "ddgst": false, 00:35:11.015 "multipath": "multipath" 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_nvme_set_hotplug", 00:35:11.015 "params": { 00:35:11.015 "period_us": 100000, 00:35:11.015 "enable": false 00:35:11.015 } 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "method": "bdev_wait_for_examine" 00:35:11.015 } 00:35:11.015 ] 00:35:11.015 }, 00:35:11.015 { 00:35:11.015 "subsystem": "nbd", 00:35:11.015 "config": [] 00:35:11.015 } 00:35:11.015 ] 00:35:11.015 }' 00:35:11.015 10:52:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.015 [2024-11-20 10:52:11.707667] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:35:11.015 [2024-11-20 10:52:11.707721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767670 ] 00:35:11.273 [2024-11-20 10:52:11.782486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.273 [2024-11-20 10:52:11.825405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.273 [2024-11-20 10:52:11.987122] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:11.838 10:52:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.838 10:52:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:11.838 10:52:12 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:11.838 10:52:12 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:11.838 10:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.096 10:52:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:12.096 10:52:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:12.096 10:52:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.096 10:52:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.096 10:52:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.096 10:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.096 10:52:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.355 10:52:12 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:12.355 10:52:12 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:12.355 10:52:12 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:12.355 10:52:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.355 10:52:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.355 10:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.355 10:52:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.612 10:52:13 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:12.612 10:52:13 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:12.612 10:52:13 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:12.612 10:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:12.871 10:52:13 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:12.871 10:52:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:12.871 10:52:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wxuvRra7b7 /tmp/tmp.piiLGFahFq 00:35:12.871 10:52:13 keyring_file -- keyring/file.sh@20 -- # killprocess 3767670 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3767670 ']' 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3767670 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767670 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3767670' 00:35:12.871 killing process with pid 3767670 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@973 -- # kill 3767670 00:35:12.871 Received shutdown signal, test time was about 1.000000 seconds 00:35:12.871 00:35:12.871 Latency(us) 00:35:12.871 [2024-11-20T09:52:13.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.871 [2024-11-20T09:52:13.602Z] =================================================================================================================== 00:35:12.871 [2024-11-20T09:52:13.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@978 -- # wait 3767670 00:35:12.871 10:52:13 keyring_file -- keyring/file.sh@21 -- # killprocess 3766143 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3766143 ']' 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3766143 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.871 10:52:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3766143 00:35:13.130 10:52:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:13.130 10:52:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:13.130 10:52:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3766143' 00:35:13.130 killing process with pid 3766143 00:35:13.130 10:52:13 keyring_file -- common/autotest_common.sh@973 -- # kill 3766143 00:35:13.130 10:52:13 keyring_file -- common/autotest_common.sh@978 -- # wait 3766143 00:35:13.389 00:35:13.389 real 0m11.892s 00:35:13.389 user 0m29.656s 00:35:13.389 sys 0m2.662s 00:35:13.389 10:52:13 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:13.389 10:52:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:13.389 ************************************ 00:35:13.389 END TEST keyring_file 00:35:13.389 ************************************ 00:35:13.389 10:52:13 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:13.389 10:52:13 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:13.389 10:52:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.389 10:52:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.389 10:52:13 -- common/autotest_common.sh@10 -- # set +x 00:35:13.389 ************************************ 00:35:13.389 START TEST keyring_linux 00:35:13.389 ************************************ 00:35:13.389 10:52:13 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:13.389 Joined session keyring: 1069500274 00:35:13.389 * Looking for test storage... 
00:35:13.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:13.389 10:52:14 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:13.389 10:52:14 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:13.389 10:52:14 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.649 10:52:14 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.649 --rc genhtml_branch_coverage=1 00:35:13.649 --rc genhtml_function_coverage=1 00:35:13.649 --rc genhtml_legend=1 00:35:13.649 --rc geninfo_all_blocks=1 00:35:13.649 --rc geninfo_unexecuted_blocks=1 00:35:13.649 00:35:13.649 ' 00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.649 --rc genhtml_branch_coverage=1 00:35:13.649 --rc genhtml_function_coverage=1 00:35:13.649 --rc genhtml_legend=1 00:35:13.649 --rc geninfo_all_blocks=1 00:35:13.649 --rc geninfo_unexecuted_blocks=1 00:35:13.649 00:35:13.649 ' 
00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:13.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.649 --rc genhtml_branch_coverage=1 00:35:13.649 --rc genhtml_function_coverage=1 00:35:13.649 --rc genhtml_legend=1 00:35:13.649 --rc geninfo_all_blocks=1 00:35:13.649 --rc geninfo_unexecuted_blocks=1 00:35:13.649 00:35:13.649 ' 00:35:13.649 10:52:14 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.649 --rc genhtml_branch_coverage=1 00:35:13.649 --rc genhtml_function_coverage=1 00:35:13.649 --rc genhtml_legend=1 00:35:13.649 --rc geninfo_all_blocks=1 00:35:13.649 --rc geninfo_unexecuted_blocks=1 00:35:13.649 00:35:13.649 ' 00:35:13.649 10:52:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:13.649 10:52:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.649 10:52:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:13.649 10:52:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.649 10:52:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.649 10:52:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.649 10:52:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.650 10:52:14 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.650 10:52:14 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.650 10:52:14 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.650 10:52:14 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.650 10:52:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.650 10:52:14 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.650 10:52:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.650 10:52:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:13.650 10:52:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:13.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:13.650 /tmp/:spdk-test:key0 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:13.650 10:52:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:13.650 10:52:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:13.650 /tmp/:spdk-test:key1 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3768226 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3768226 00:35:13.650 10:52:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3768226 ']' 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.650 10:52:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:13.650 [2024-11-20 10:52:14.333988] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:35:13.650 [2024-11-20 10:52:14.334042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768226 ] 00:35:13.910 [2024-11-20 10:52:14.397124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.910 [2024-11-20 10:52:14.441932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:14.169 10:52:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.169 [2024-11-20 10:52:14.670793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.169 null0 00:35:14.169 [2024-11-20 10:52:14.702848] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:14.169 [2024-11-20 10:52:14.703240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.169 10:52:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:14.169 645421635 00:35:14.169 10:52:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:14.169 872880537 00:35:14.169 10:52:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3768277 00:35:14.169 10:52:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3768277 /var/tmp/bperf.sock 00:35:14.169 10:52:14 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3768277 ']' 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.169 10:52:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.169 [2024-11-20 10:52:14.776135] Starting SPDK v25.01-pre git sha1 876509865 / DPDK 24.03.0 initialization... 
00:35:14.169 [2024-11-20 10:52:14.776179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768277 ] 00:35:14.169 [2024-11-20 10:52:14.851805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.169 [2024-11-20 10:52:14.894650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.427 10:52:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.427 10:52:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:14.427 10:52:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:14.427 10:52:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:14.427 10:52:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:14.427 10:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:14.686 10:52:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:14.686 10:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:14.944 [2024-11-20 10:52:15.546794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:14.944 nvme0n1 00:35:14.944 10:52:15 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:14.944 10:52:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:14.945 10:52:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:14.945 10:52:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:14.945 10:52:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:14.945 10:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.203 10:52:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:15.203 10:52:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:15.203 10:52:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:15.203 10:52:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:15.203 10:52:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.203 10:52:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:15.203 10:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@25 -- # sn=645421635 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 645421635 == \6\4\5\4\2\1\6\3\5 ]] 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 645421635 00:35:15.461 10:52:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:15.461 10:52:16 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.461 Running I/O for 1 seconds... 00:35:16.650 21359.00 IOPS, 83.43 MiB/s 00:35:16.650 Latency(us) 00:35:16.650 [2024-11-20T09:52:17.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.650 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:16.650 nvme0n1 : 1.01 21359.46 83.44 0.00 0.00 5973.26 5071.92 13848.04 00:35:16.650 [2024-11-20T09:52:17.381Z] =================================================================================================================== 00:35:16.650 [2024-11-20T09:52:17.381Z] Total : 21359.46 83.44 0.00 0.00 5973.26 5071.92 13848.04 00:35:16.650 { 00:35:16.650 "results": [ 00:35:16.650 { 00:35:16.650 "job": "nvme0n1", 00:35:16.650 "core_mask": "0x2", 00:35:16.650 "workload": "randread", 00:35:16.650 "status": "finished", 00:35:16.650 "queue_depth": 128, 00:35:16.650 "io_size": 4096, 00:35:16.650 "runtime": 1.005971, 00:35:16.650 "iops": 21359.462648525652, 00:35:16.650 "mibps": 83.43540097080333, 00:35:16.650 "io_failed": 0, 00:35:16.650 "io_timeout": 0, 00:35:16.650 "avg_latency_us": 5973.262124520185, 00:35:16.650 "min_latency_us": 5071.91652173913, 00:35:16.650 "max_latency_us": 13848.041739130435 00:35:16.650 } 00:35:16.650 ], 00:35:16.650 "core_count": 1 00:35:16.650 } 00:35:16.650 10:52:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:16.650 10:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:16.650 10:52:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:16.650 10:52:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:16.650 10:52:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:16.650 10:52:17 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:16.650 10:52:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:16.650 10:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.916 10:52:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:16.916 10:52:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:16.916 10:52:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:16.916 10:52:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.916 10:52:17 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:16.916 10:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:17.181 [2024-11-20 10:52:17.741626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:17.181 [2024-11-20 10:52:17.741688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5a70 (107): Transport endpoint is not connected 00:35:17.181 [2024-11-20 10:52:17.742683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5a70 (9): Bad file descriptor 00:35:17.181 [2024-11-20 10:52:17.743684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:17.181 [2024-11-20 10:52:17.743694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:17.181 [2024-11-20 10:52:17.743702] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:17.181 [2024-11-20 10:52:17.743715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:17.181 request: 00:35:17.181 { 00:35:17.181 "name": "nvme0", 00:35:17.181 "trtype": "tcp", 00:35:17.181 "traddr": "127.0.0.1", 00:35:17.181 "adrfam": "ipv4", 00:35:17.181 "trsvcid": "4420", 00:35:17.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.181 "prchk_reftag": false, 00:35:17.181 "prchk_guard": false, 00:35:17.181 "hdgst": false, 00:35:17.181 "ddgst": false, 00:35:17.181 "psk": ":spdk-test:key1", 00:35:17.181 "allow_unrecognized_csi": false, 00:35:17.181 "method": "bdev_nvme_attach_controller", 00:35:17.181 "req_id": 1 00:35:17.181 } 00:35:17.181 Got JSON-RPC error response 00:35:17.181 response: 00:35:17.181 { 00:35:17.181 "code": -5, 00:35:17.181 "message": "Input/output error" 00:35:17.181 } 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@33 -- # sn=645421635 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 645421635 00:35:17.181 1 links removed 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:17.181 
10:52:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@33 -- # sn=872880537 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 872880537 00:35:17.181 1 links removed 00:35:17.181 10:52:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3768277 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3768277 ']' 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3768277 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768277 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768277' 00:35:17.181 killing process with pid 3768277 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@973 -- # kill 3768277 00:35:17.181 Received shutdown signal, test time was about 1.000000 seconds 00:35:17.181 00:35:17.181 Latency(us) 00:35:17.181 [2024-11-20T09:52:17.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.181 [2024-11-20T09:52:17.912Z] =================================================================================================================== 00:35:17.181 [2024-11-20T09:52:17.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.181 10:52:17 keyring_linux -- common/autotest_common.sh@978 -- # wait 3768277 
00:35:17.440 10:52:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3768226 00:35:17.440 10:52:17 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3768226 ']' 00:35:17.440 10:52:17 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3768226 00:35:17.440 10:52:17 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:17.440 10:52:17 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.440 10:52:17 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768226 00:35:17.440 10:52:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.440 10:52:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.440 10:52:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768226' 00:35:17.440 killing process with pid 3768226 00:35:17.440 10:52:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 3768226 00:35:17.440 10:52:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 3768226 00:35:17.698 00:35:17.698 real 0m4.348s 00:35:17.698 user 0m8.280s 00:35:17.698 sys 0m1.390s 00:35:17.698 10:52:18 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.698 10:52:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.698 ************************************ 00:35:17.698 END TEST keyring_linux 00:35:17.698 ************************************ 00:35:17.698 10:52:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:17.698 10:52:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:17.698 10:52:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:17.698 10:52:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:17.698 10:52:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:17.698 10:52:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:17.698 10:52:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:17.698 10:52:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.698 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:35:17.698 10:52:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:17.698 10:52:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:17.698 10:52:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:17.698 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:35:23.120 INFO: APP EXITING 00:35:23.120 INFO: killing all VMs 00:35:23.120 INFO: killing vhost app 00:35:23.120 INFO: EXIT DONE 00:35:25.666 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:25.666 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:25.666 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:25.666 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:28.962 Cleaning 00:35:28.962 Removing: /var/run/dpdk/spdk0/config 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:28.962 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:28.962 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:28.962 Removing: /var/run/dpdk/spdk1/config 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:28.962 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:28.962 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:28.962 Removing: /var/run/dpdk/spdk2/config 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:28.962 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:28.962 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:28.962 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:28.962 Removing: /var/run/dpdk/spdk3/config 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:28.962 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:28.962 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:28.962 Removing: /var/run/dpdk/spdk4/config 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:28.962 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:28.962 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:28.962 Removing: /dev/shm/bdev_svc_trace.1 00:35:28.962 Removing: /dev/shm/nvmf_trace.0 00:35:28.962 Removing: /dev/shm/spdk_tgt_trace.pid3287872 00:35:28.962 Removing: /var/run/dpdk/spdk0 00:35:28.962 Removing: /var/run/dpdk/spdk1 00:35:28.962 Removing: /var/run/dpdk/spdk2 00:35:28.962 Removing: /var/run/dpdk/spdk3 00:35:28.962 Removing: /var/run/dpdk/spdk4 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3285579 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3286651 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3287872 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3288366 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3289318 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3289546 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3290523 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3290531 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3290884 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3292415 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3293838 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3294271 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3294608 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3294828 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3294989 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3295241 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3295490 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3295943 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3296904 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3299910 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3300167 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3300421 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3300424 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3300918 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3300928 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3301417 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3301426 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3301710 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3301914 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3302059 00:35:28.962 Removing: 
/var/run/dpdk/spdk_pid3302177 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3302657 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3302834 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3303185 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3306999 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3311284 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3321540 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3322231 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3326510 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3326893 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3331244 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3337141 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3339870 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3350466 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3359604 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3361223 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3362181 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3379348 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3383327 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3429943 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3435185 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3441094 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3447943 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3447945 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3448856 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3449769 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3450683 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3451155 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3451157 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3451407 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3451618 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3451620 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3452538 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3453422 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3454191 00:35:28.962 Removing: /var/run/dpdk/spdk_pid3454894 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3455055 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3455292 
00:35:29.221 Removing: /var/run/dpdk/spdk_pid3456322 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3457301 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3465617 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3494955 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3499461 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3501086 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3502915 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3502936 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3503172 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3503270 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3503777 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3505529 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3506510 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3506910 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3509103 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3509596 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3510104 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3514510 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3520487 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3520489 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3520491 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3524299 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3532663 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3536604 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3542577 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3543770 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3545313 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3546638 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3551298 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3555607 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3559692 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3567553 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3567712 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3572534 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3572765 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3572989 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3573306 00:35:29.221 Removing: 
/var/run/dpdk/spdk_pid3573389 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3577805 00:35:29.221 Removing: /var/run/dpdk/spdk_pid3578303 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3582757 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3585387 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3590783 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3596115 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3604897 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3612174 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3612185 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3631207 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3631682 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3632320 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3632843 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3633577 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3634058 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3634597 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3635216 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3639247 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3639491 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3645551 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3645807 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3651080 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3655529 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3665630 00:35:29.222 Removing: /var/run/dpdk/spdk_pid3666246 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3670496 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3670746 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3674973 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3680628 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3683215 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3693168 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3701836 00:35:29.480 Removing: /var/run/dpdk/spdk_pid3703558 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3704380 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3720988 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3724804 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3727609 
00:35:29.481 Removing: /var/run/dpdk/spdk_pid3735454 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3735460 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3740498 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3742457 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3744419 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3745477 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3747651 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3748720 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3757969 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3758431 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3758890 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3761259 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3761823 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3762311 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3766143 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3766156 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3767670 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3768226 00:35:29.481 Removing: /var/run/dpdk/spdk_pid3768277 00:35:29.481 Clean 00:35:29.481 10:52:30 -- common/autotest_common.sh@1453 -- # return 0 00:35:29.481 10:52:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:29.481 10:52:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.481 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:35:29.481 10:52:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:29.481 10:52:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.481 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:35:29.740 10:52:30 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:29.740 10:52:30 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:29.740 10:52:30 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:29.740 10:52:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:29.740 10:52:30 
-- spdk/autotest.sh@398 -- # hostname 00:35:29.740 10:52:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:29.740 geninfo: WARNING: invalid characters removed from testname! 00:35:51.676 10:52:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:53.582 10:52:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:55.488 10:52:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:57.393 10:52:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:59.297 10:52:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:01.206 10:53:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:03.110 10:53:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:03.110 10:53:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:03.110 10:53:03 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:03.110 10:53:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:03.110 10:53:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:03.110 10:53:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:03.110 + [[ -n 
3208261 ]] 00:36:03.110 + sudo kill 3208261 00:36:03.120 [Pipeline] } 00:36:03.136 [Pipeline] // stage 00:36:03.141 [Pipeline] } 00:36:03.156 [Pipeline] // timeout 00:36:03.160 [Pipeline] } 00:36:03.175 [Pipeline] // catchError 00:36:03.179 [Pipeline] } 00:36:03.194 [Pipeline] // wrap 00:36:03.201 [Pipeline] } 00:36:03.216 [Pipeline] // catchError 00:36:03.227 [Pipeline] stage 00:36:03.229 [Pipeline] { (Epilogue) 00:36:03.243 [Pipeline] catchError 00:36:03.245 [Pipeline] { 00:36:03.258 [Pipeline] echo 00:36:03.260 Cleanup processes 00:36:03.266 [Pipeline] sh 00:36:03.552 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.552 3778894 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.567 [Pipeline] sh 00:36:03.853 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.853 ++ grep -v 'sudo pgrep' 00:36:03.853 ++ awk '{print $1}' 00:36:03.853 + sudo kill -9 00:36:03.853 + true 00:36:03.866 [Pipeline] sh 00:36:04.150 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:16.375 [Pipeline] sh 00:36:16.660 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:16.660 Artifacts sizes are good 00:36:16.675 [Pipeline] archiveArtifacts 00:36:16.683 Archiving artifacts 00:36:16.826 [Pipeline] sh 00:36:17.113 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:17.127 [Pipeline] cleanWs 00:36:17.212 [WS-CLEANUP] Deleting project workspace... 00:36:17.212 [WS-CLEANUP] Deferred wipeout is used... 00:36:17.252 [WS-CLEANUP] done 00:36:17.254 [Pipeline] } 00:36:17.273 [Pipeline] // catchError 00:36:17.286 [Pipeline] sh 00:36:17.570 + logger -p user.info -t JENKINS-CI 00:36:17.579 [Pipeline] } 00:36:17.593 [Pipeline] // stage 00:36:17.598 [Pipeline] } 00:36:17.611 [Pipeline] // node 00:36:17.616 [Pipeline] End of Pipeline 00:36:17.657 Finished: SUCCESS